Test Report: Docker_Linux_containerd_arm64 20316

                    
afc1769d7af9cf0fbffe1101eacbcd6e5c84f215:2025-01-27:38084

Failed tests (2/330)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 248   | TestScheduledStopUnix                                   | 38.59        |
| 307   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 377.22       |
|-------|---------------------------------------------------------|--------------|
TestScheduledStopUnix (38.59s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-403142 --memory=2048 --driver=docker  --container-runtime=containerd
E0127 02:42:06.565397 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-403142 --memory=2048 --driver=docker  --container-runtime=containerd: (33.411837461s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-403142 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-403142 -n scheduled-stop-403142
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-403142 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 3737384 running but should have been killed on reschedule of stop
panic.go:629: *** TestScheduledStopUnix FAILED at 2025-01-27 02:42:26.704111092 +0000 UTC m=+2112.324694908
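The decisive check is the one at scheduled_stop_test.go:98: rescheduling the stop (first 5m, then 15s) is expected to kill the previously spawned scheduled-stop process, but PID 3737384 was still alive. Below is a minimal, hypothetical Go sketch of that kind of Unix process-liveness check; it is not the actual minikube test code, and the PID is taken from the failure message above.

package main

import (
	"fmt"
	"os"
	"syscall"
)

// processRunning reports whether a process with the given PID still exists.
// On Unix, os.FindProcess always succeeds, and sending signal 0 performs
// existence checking without delivering a signal.
func processRunning(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	pid := 3737384 // PID reported in the failure above
	if processRunning(pid) {
		fmt.Printf("process %d running but should have been killed on reschedule of stop\n", pid)
	}
}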
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-403142
helpers_test.go:235: (dbg) docker inspect scheduled-stop-403142:

-- stdout --
	[
	    {
	        "Id": "9ed9f87e1d5d08074c62f163f5255e997ce686d01b3d3d0c1c0d2d4b6fdd600b",
	        "Created": "2025-01-27T02:41:58.183616506Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3735450,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T02:41:58.347203991Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/9ed9f87e1d5d08074c62f163f5255e997ce686d01b3d3d0c1c0d2d4b6fdd600b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ed9f87e1d5d08074c62f163f5255e997ce686d01b3d3d0c1c0d2d4b6fdd600b/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ed9f87e1d5d08074c62f163f5255e997ce686d01b3d3d0c1c0d2d4b6fdd600b/hosts",
	        "LogPath": "/var/lib/docker/containers/9ed9f87e1d5d08074c62f163f5255e997ce686d01b3d3d0c1c0d2d4b6fdd600b/9ed9f87e1d5d08074c62f163f5255e997ce686d01b3d3d0c1c0d2d4b6fdd600b-json.log",
	        "Name": "/scheduled-stop-403142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-403142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-403142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/15bae22734b53d27129ce653c37ac409debbe222a5efa8f33abbe880d70ec95d-init/diff:/var/lib/docker/overlay2/5296668a0a30b38feb9159e191c47d5587ed9f36bb9a48e894c12f88095e8aab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15bae22734b53d27129ce653c37ac409debbe222a5efa8f33abbe880d70ec95d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15bae22734b53d27129ce653c37ac409debbe222a5efa8f33abbe880d70ec95d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15bae22734b53d27129ce653c37ac409debbe222a5efa8f33abbe880d70ec95d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-403142",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-403142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-403142",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-403142",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-403142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5babefdb6db16583dd57f5fc3adb4f18904ad77ee7f07b2779596fa22bab403e",
	            "SandboxKey": "/var/run/docker/netns/5babefdb6db1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37686"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37687"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37690"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37688"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37689"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-403142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d4f735f82bda2544cc2a505f6f918bac8bb135054ef3ef665f568475efd5db1b",
	                    "EndpointID": "65cd271e7591678f8062713a6d9868aa87c825102187ec522d70441f57ed8f2a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-403142",
	                        "9ed9f87e1d5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
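The inspect output above shows the container still up ("Status": "running", "Running": true, Pid 3735450) even though a stop had been scheduled. As a small, hypothetical Go sketch, the State block of docker inspect JSON can be decoded like this, with field names and values taken from the output above:

package main

import (
	"encoding/json"
	"fmt"
)

// inspectState mirrors the fields of interest from the State block above.
type inspectState struct {
	Status  string
	Running bool
	Pid     int
}

func main() {
	// Trimmed-down sample of the docker inspect output shown above.
	raw := []byte(`[{"State":{"Status":"running","Running":true,"Pid":3735450}}]`)
	var containers []struct {
		State inspectState
	}
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	fmt.Println(containers[0].State.Status, containers[0].State.Running, containers[0].State.Pid)
	// Output: running true 3735450
}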
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-403142 -n scheduled-stop-403142
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-403142 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-403142 logs -n 25: (1.394027868s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-136196            | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:35 UTC | 27 Jan 25 02:36 UTC |
	| start   | -p multinode-136196            | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:36 UTC | 27 Jan 25 02:37 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-136196       | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:37 UTC |                     |
	| node    | multinode-136196 node delete   | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:37 UTC | 27 Jan 25 02:38 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-136196 stop          | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:38 UTC | 27 Jan 25 02:38 UTC |
	| start   | -p multinode-136196            | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:38 UTC | 27 Jan 25 02:39 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| node    | list -p multinode-136196       | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:39 UTC |                     |
	| start   | -p multinode-136196-m02        | multinode-136196-m02  | jenkins | v1.35.0 | 27 Jan 25 02:39 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| start   | -p multinode-136196-m03        | multinode-136196-m03  | jenkins | v1.35.0 | 27 Jan 25 02:39 UTC | 27 Jan 25 02:39 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| node    | add -p multinode-136196        | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:39 UTC |                     |
	| delete  | -p multinode-136196-m03        | multinode-136196-m03  | jenkins | v1.35.0 | 27 Jan 25 02:39 UTC | 27 Jan 25 02:39 UTC |
	| delete  | -p multinode-136196            | multinode-136196      | jenkins | v1.35.0 | 27 Jan 25 02:39 UTC | 27 Jan 25 02:40 UTC |
	| start   | -p test-preload-965764         | test-preload-965764   | jenkins | v1.35.0 | 27 Jan 25 02:40 UTC | 27 Jan 25 02:41 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-965764 image pull | test-preload-965764   | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC | 27 Jan 25 02:41 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-965764         | test-preload-965764   | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC | 27 Jan 25 02:41 UTC |
	| start   | -p test-preload-965764         | test-preload-965764   | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC | 27 Jan 25 02:41 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| image   | test-preload-965764 image list | test-preload-965764   | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC | 27 Jan 25 02:41 UTC |
	| delete  | -p test-preload-965764         | test-preload-965764   | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC | 27 Jan 25 02:41 UTC |
	| start   | -p scheduled-stop-403142       | scheduled-stop-403142 | jenkins | v1.35.0 | 27 Jan 25 02:41 UTC | 27 Jan 25 02:42 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-403142       | scheduled-stop-403142 | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-403142       | scheduled-stop-403142 | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-403142       | scheduled-stop-403142 | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-403142       | scheduled-stop-403142 | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-403142       | scheduled-stop-403142 | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-403142       | scheduled-stop-403142 | jenkins | v1.35.0 | 27 Jan 25 02:42 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 02:41:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:41:52.825280 3734956 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:41:52.825419 3734956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:41:52.825423 3734956 out.go:358] Setting ErrFile to fd 2...
	I0127 02:41:52.825427 3734956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:41:52.825665 3734956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:41:52.826032 3734956 out.go:352] Setting JSON to false
	I0127 02:41:52.826930 3734956 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":91457,"bootTime":1737854256,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:41:52.826986 3734956 start.go:139] virtualization:  
	I0127 02:41:52.830948 3734956 out.go:177] * [scheduled-stop-403142] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 02:41:52.835463 3734956 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:41:52.835589 3734956 notify.go:220] Checking for updates...
	I0127 02:41:52.842162 3734956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:41:52.845402 3734956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:41:52.848471 3734956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:41:52.851698 3734956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 02:41:52.854817 3734956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:41:52.858269 3734956 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:41:52.883689 3734956 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:41:52.883832 3734956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:41:52.941852 3734956 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 02:41:52.932358168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:41:52.941948 3734956 docker.go:318] overlay module found
	I0127 02:41:52.947128 3734956 out.go:177] * Using the docker driver based on user configuration
	I0127 02:41:52.949986 3734956 start.go:297] selected driver: docker
	I0127 02:41:52.950001 3734956 start.go:901] validating driver "docker" against <nil>
	I0127 02:41:52.950014 3734956 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:41:52.950874 3734956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:41:53.009850 3734956 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 02:41:52.999947196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:41:53.010061 3734956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 02:41:53.010387 3734956 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 02:41:53.013443 3734956 out.go:177] * Using Docker driver with root privileges
	I0127 02:41:53.016429 3734956 cni.go:84] Creating CNI manager for ""
	I0127 02:41:53.016499 3734956 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:41:53.016506 3734956 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 02:41:53.016599 3734956 start.go:340] cluster config:
	{Name:scheduled-stop-403142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-403142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:41:53.019930 3734956 out.go:177] * Starting "scheduled-stop-403142" primary control-plane node in "scheduled-stop-403142" cluster
	I0127 02:41:53.022903 3734956 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 02:41:53.025933 3734956 out.go:177] * Pulling base image v0.0.46 ...
	I0127 02:41:53.028797 3734956 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:41:53.028848 3734956 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 02:41:53.028856 3734956 cache.go:56] Caching tarball of preloaded images
	I0127 02:41:53.028882 3734956 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 02:41:53.028967 3734956 preload.go:172] Found /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0127 02:41:53.028977 3734956 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 02:41:53.029362 3734956 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/config.json ...
	I0127 02:41:53.029383 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/config.json: {Name:mk78408a5703760627a4cabe26efc9b49c20c124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:41:53.050301 3734956 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 02:41:53.050313 3734956 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 02:41:53.050325 3734956 cache.go:230] Successfully downloaded all kic artifacts
	I0127 02:41:53.050358 3734956 start.go:360] acquireMachinesLock for scheduled-stop-403142: {Name:mk82b25d60d641edf186439411207ad6c043b1cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:41:53.050464 3734956 start.go:364] duration metric: took 92.051µs to acquireMachinesLock for "scheduled-stop-403142"
	I0127 02:41:53.050488 3734956 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-403142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-403142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 02:41:53.050555 3734956 start.go:125] createHost starting for "" (driver="docker")
	I0127 02:41:53.055794 3734956 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0127 02:41:53.056055 3734956 start.go:159] libmachine.API.Create for "scheduled-stop-403142" (driver="docker")
	I0127 02:41:53.056086 3734956 client.go:168] LocalClient.Create starting
	I0127 02:41:53.056174 3734956 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem
	I0127 02:41:53.056207 3734956 main.go:141] libmachine: Decoding PEM data...
	I0127 02:41:53.056223 3734956 main.go:141] libmachine: Parsing certificate...
	I0127 02:41:53.056273 3734956 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem
	I0127 02:41:53.056292 3734956 main.go:141] libmachine: Decoding PEM data...
	I0127 02:41:53.056301 3734956 main.go:141] libmachine: Parsing certificate...
	I0127 02:41:53.056668 3734956 cli_runner.go:164] Run: docker network inspect scheduled-stop-403142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 02:41:53.073313 3734956 cli_runner.go:211] docker network inspect scheduled-stop-403142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 02:41:53.073394 3734956 network_create.go:284] running [docker network inspect scheduled-stop-403142] to gather additional debugging logs...
	I0127 02:41:53.073409 3734956 cli_runner.go:164] Run: docker network inspect scheduled-stop-403142
	W0127 02:41:53.090682 3734956 cli_runner.go:211] docker network inspect scheduled-stop-403142 returned with exit code 1
	I0127 02:41:53.090709 3734956 network_create.go:287] error running [docker network inspect scheduled-stop-403142]: docker network inspect scheduled-stop-403142: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-403142 not found
	I0127 02:41:53.090723 3734956 network_create.go:289] output of [docker network inspect scheduled-stop-403142]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-403142 not found
	
	** /stderr **
	I0127 02:41:53.090829 3734956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 02:41:53.111882 3734956 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20c6b9faf740 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a5:84:e8:b3} reservation:<nil>}
	I0127 02:41:53.112395 3734956 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ed55a6afcd29 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ae:45:09:f0} reservation:<nil>}
	I0127 02:41:53.112940 3734956 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6d1bfb053f15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:0f:00:a9:30} reservation:<nil>}
	I0127 02:41:53.113541 3734956 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001949760}
	I0127 02:41:53.113569 3734956 network_create.go:124] attempt to create docker network scheduled-stop-403142 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 02:41:53.113668 3734956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-403142 scheduled-stop-403142
	I0127 02:41:53.189392 3734956 network_create.go:108] docker network scheduled-stop-403142 192.168.76.0/24 created
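The subnet probing above follows a simple pattern: candidate /24 networks are tried in order and skipped while already taken (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24) until a free one is found (192.168.76.0/24). A hypothetical Go sketch of that selection loop, using only the subnets named in this log and not minikube's actual implementation:

package main

import "fmt"

func main() {
	// Subnets the log above reported as already taken.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	// Candidates advance the third octet by 9 (49, 58, 67, 76, ...),
	// matching the sequence seen in the log above.
	for octet := 49; octet <= 254; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet) // 192.168.76.0/24
		break
	}
}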
	I0127 02:41:53.189416 3734956 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-403142" container
	I0127 02:41:53.189500 3734956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 02:41:53.205242 3734956 cli_runner.go:164] Run: docker volume create scheduled-stop-403142 --label name.minikube.sigs.k8s.io=scheduled-stop-403142 --label created_by.minikube.sigs.k8s.io=true
	I0127 02:41:53.223595 3734956 oci.go:103] Successfully created a docker volume scheduled-stop-403142
	I0127 02:41:53.223685 3734956 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-403142-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-403142 --entrypoint /usr/bin/test -v scheduled-stop-403142:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 02:41:53.768580 3734956 oci.go:107] Successfully prepared a docker volume scheduled-stop-403142
	I0127 02:41:53.768626 3734956 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:41:53.768643 3734956 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 02:41:53.768713 3734956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-403142:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 02:41:58.116009 3734956 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-403142:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.347199391s)
	I0127 02:41:58.116030 3734956 kic.go:203] duration metric: took 4.347383623s to extract preloaded images to volume ...
	W0127 02:41:58.116177 3734956 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 02:41:58.116287 3734956 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 02:41:58.168820 3734956 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-403142 --name scheduled-stop-403142 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-403142 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-403142 --network scheduled-stop-403142 --ip 192.168.76.2 --volume scheduled-stop-403142:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 02:41:58.507913 3734956 cli_runner.go:164] Run: docker container inspect scheduled-stop-403142 --format={{.State.Running}}
	I0127 02:41:58.534091 3734956 cli_runner.go:164] Run: docker container inspect scheduled-stop-403142 --format={{.State.Status}}
	I0127 02:41:58.559793 3734956 cli_runner.go:164] Run: docker exec scheduled-stop-403142 stat /var/lib/dpkg/alternatives/iptables
	I0127 02:41:58.619804 3734956 oci.go:144] the created container "scheduled-stop-403142" has a running status.
	I0127 02:41:58.619825 3734956 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa...
	I0127 02:41:58.843080 3734956 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 02:41:58.880257 3734956 cli_runner.go:164] Run: docker container inspect scheduled-stop-403142 --format={{.State.Status}}
	I0127 02:41:58.911997 3734956 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 02:41:58.912008 3734956 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-403142 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 02:41:58.977955 3734956 cli_runner.go:164] Run: docker container inspect scheduled-stop-403142 --format={{.State.Status}}
	I0127 02:41:59.002352 3734956 machine.go:93] provisionDockerMachine start ...
	I0127 02:41:59.002441 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:41:59.025266 3734956 main.go:141] libmachine: Using SSH client type: native
	I0127 02:41:59.025554 3734956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37686 <nil> <nil>}
	I0127 02:41:59.025562 3734956 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:41:59.028502 3734956 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0127 02:42:02.154300 3734956 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-403142
	
	I0127 02:42:02.154316 3734956 ubuntu.go:169] provisioning hostname "scheduled-stop-403142"
	I0127 02:42:02.154396 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:02.172015 3734956 main.go:141] libmachine: Using SSH client type: native
	I0127 02:42:02.172262 3734956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37686 <nil> <nil>}
	I0127 02:42:02.172272 3734956 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-403142 && echo "scheduled-stop-403142" | sudo tee /etc/hostname
	I0127 02:42:02.309934 3734956 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-403142
	
	I0127 02:42:02.310005 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:02.328224 3734956 main.go:141] libmachine: Using SSH client type: native
	I0127 02:42:02.328472 3734956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37686 <nil> <nil>}
	I0127 02:42:02.328487 3734956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-403142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-403142/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-403142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:42:02.454082 3734956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:42:02.454120 3734956 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20316-3581420/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-3581420/.minikube}
	I0127 02:42:02.454151 3734956 ubuntu.go:177] setting up certificates
	I0127 02:42:02.454159 3734956 provision.go:84] configureAuth start
	I0127 02:42:02.454216 3734956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-403142
	I0127 02:42:02.473775 3734956 provision.go:143] copyHostCerts
	I0127 02:42:02.473835 3734956 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem, removing ...
	I0127 02:42:02.473842 3734956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem
	I0127 02:42:02.473919 3734956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem (1078 bytes)
	I0127 02:42:02.474021 3734956 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem, removing ...
	I0127 02:42:02.474025 3734956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem
	I0127 02:42:02.474050 3734956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem (1123 bytes)
	I0127 02:42:02.474255 3734956 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem, removing ...
	I0127 02:42:02.474260 3734956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem
	I0127 02:42:02.474289 3734956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem (1679 bytes)
	I0127 02:42:02.474364 3734956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-403142 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-403142]
	I0127 02:42:02.965230 3734956 provision.go:177] copyRemoteCerts
	I0127 02:42:02.965289 3734956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:42:02.965345 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:02.982267 3734956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37686 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa Username:docker}
	I0127 02:42:03.075330 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:42:03.100142 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0127 02:42:03.125607 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 02:42:03.152824 3734956 provision.go:87] duration metric: took 698.652707ms to configureAuth
	I0127 02:42:03.152845 3734956 ubuntu.go:193] setting minikube options for container-runtime
	I0127 02:42:03.153041 3734956 config.go:182] Loaded profile config "scheduled-stop-403142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:42:03.153047 3734956 machine.go:96] duration metric: took 4.150684598s to provisionDockerMachine
	I0127 02:42:03.153052 3734956 client.go:171] duration metric: took 10.096961148s to LocalClient.Create
	I0127 02:42:03.153066 3734956 start.go:167] duration metric: took 10.097012486s to libmachine.API.Create "scheduled-stop-403142"
	I0127 02:42:03.153072 3734956 start.go:293] postStartSetup for "scheduled-stop-403142" (driver="docker")
	I0127 02:42:03.153080 3734956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:42:03.153138 3734956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:42:03.153182 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:03.173590 3734956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37686 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa Username:docker}
	I0127 02:42:03.263129 3734956 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:42:03.266384 3734956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 02:42:03.266410 3734956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 02:42:03.266420 3734956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 02:42:03.266426 3734956 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 02:42:03.266436 3734956 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-3581420/.minikube/addons for local assets ...
	I0127 02:42:03.266494 3734956 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-3581420/.minikube/files for local assets ...
	I0127 02:42:03.266586 3734956 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem -> 35868002.pem in /etc/ssl/certs
	I0127 02:42:03.266719 3734956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:42:03.275007 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem --> /etc/ssl/certs/35868002.pem (1708 bytes)
	I0127 02:42:03.298634 3734956 start.go:296] duration metric: took 145.549397ms for postStartSetup
	I0127 02:42:03.299002 3734956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-403142
	I0127 02:42:03.316592 3734956 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/config.json ...
	I0127 02:42:03.316926 3734956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:42:03.316981 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:03.335042 3734956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37686 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa Username:docker}
	I0127 02:42:03.423288 3734956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 02:42:03.427786 3734956 start.go:128] duration metric: took 10.377215852s to createHost
	I0127 02:42:03.427802 3734956 start.go:83] releasing machines lock for "scheduled-stop-403142", held for 10.377330983s
	I0127 02:42:03.427880 3734956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-403142
	I0127 02:42:03.444967 3734956 ssh_runner.go:195] Run: cat /version.json
	I0127 02:42:03.445013 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:03.445025 3734956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:42:03.445075 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:03.467597 3734956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37686 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa Username:docker}
	I0127 02:42:03.470479 3734956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37686 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa Username:docker}
	I0127 02:42:03.692235 3734956 ssh_runner.go:195] Run: systemctl --version
	I0127 02:42:03.696495 3734956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 02:42:03.700663 3734956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 02:42:03.724587 3734956 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 02:42:03.724656 3734956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:42:03.755982 3734956 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 02:42:03.755995 3734956 start.go:495] detecting cgroup driver to use...
	I0127 02:42:03.756027 3734956 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 02:42:03.756076 3734956 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 02:42:03.768656 3734956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 02:42:03.780319 3734956 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:42:03.780372 3734956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:42:03.794525 3734956 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:42:03.809242 3734956 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:42:03.903374 3734956 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:42:03.998798 3734956 docker.go:233] disabling docker service ...
	I0127 02:42:03.998889 3734956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:42:04.023209 3734956 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:42:04.036170 3734956 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:42:04.130949 3734956 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:42:04.217343 3734956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:42:04.228915 3734956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:42:04.245417 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 02:42:04.256430 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 02:42:04.266294 3734956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 02:42:04.266354 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 02:42:04.276613 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:42:04.286452 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 02:42:04.297109 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:42:04.306931 3734956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:42:04.315970 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 02:42:04.325452 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 02:42:04.335325 3734956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 02:42:04.344732 3734956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:42:04.353456 3734956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:42:04.362236 3734956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:42:04.451378 3734956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 02:42:04.582239 3734956 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 02:42:04.582303 3734956 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:42:04.585908 3734956 start.go:563] Will wait 60s for crictl version
	I0127 02:42:04.585972 3734956 ssh_runner.go:195] Run: which crictl
	I0127 02:42:04.589268 3734956 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:42:04.625645 3734956 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0127 02:42:04.625704 3734956 ssh_runner.go:195] Run: containerd --version
	I0127 02:42:04.647295 3734956 ssh_runner.go:195] Run: containerd --version
	I0127 02:42:04.674729 3734956 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
	I0127 02:42:04.680170 3734956 cli_runner.go:164] Run: docker network inspect scheduled-stop-403142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 02:42:04.696520 3734956 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 02:42:04.700097 3734956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:42:04.711158 3734956 kubeadm.go:883] updating cluster {Name:scheduled-stop-403142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-403142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:42:04.711265 3734956 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:42:04.711321 3734956 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:42:04.745824 3734956 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:42:04.745836 3734956 containerd.go:534] Images already preloaded, skipping extraction
	I0127 02:42:04.745896 3734956 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:42:04.779709 3734956 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:42:04.779721 3734956 cache_images.go:84] Images are preloaded, skipping loading
	I0127 02:42:04.779728 3734956 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.1 containerd true true} ...
	I0127 02:42:04.779820 3734956 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-403142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-403142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:42:04.779892 3734956 ssh_runner.go:195] Run: sudo crictl info
	I0127 02:42:04.819147 3734956 cni.go:84] Creating CNI manager for ""
	I0127 02:42:04.819159 3734956 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:42:04.819166 3734956 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:42:04.819190 3734956 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-403142 NodeName:scheduled-stop-403142 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:42:04.819309 3734956 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "scheduled-stop-403142"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:42:04.819382 3734956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 02:42:04.828236 3734956 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:42:04.828302 3734956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:42:04.836998 3734956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0127 02:42:04.855156 3734956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:42:04.873482 3734956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
	I0127 02:42:04.892216 3734956 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 02:42:04.895722 3734956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:42:04.906608 3734956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:42:04.995593 3734956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:42:05.012262 3734956 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142 for IP: 192.168.76.2
	I0127 02:42:05.012278 3734956 certs.go:194] generating shared ca certs ...
	I0127 02:42:05.012298 3734956 certs.go:226] acquiring lock for ca certs: {Name:mk1bae14ef6af74439063c8478bc03213541b880 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:05.012452 3734956 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.key
	I0127 02:42:05.012494 3734956 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.key
	I0127 02:42:05.012500 3734956 certs.go:256] generating profile certs ...
	I0127 02:42:05.012557 3734956 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/client.key
	I0127 02:42:05.012578 3734956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/client.crt with IP's: []
	I0127 02:42:05.311072 3734956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/client.crt ...
	I0127 02:42:05.311088 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/client.crt: {Name:mk758880b9f27137c5825329bacf49448c5cad41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:05.311295 3734956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/client.key ...
	I0127 02:42:05.311304 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/client.key: {Name:mk879725613c34bf0afdb4f6b5fb32b1571d8bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:05.311399 3734956 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.key.017333d5
	I0127 02:42:05.311414 3734956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.crt.017333d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0127 02:42:05.663186 3734956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.crt.017333d5 ...
	I0127 02:42:05.663207 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.crt.017333d5: {Name:mk4ddb281a24b78e4c524508551aade5f3117495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:05.663399 3734956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.key.017333d5 ...
	I0127 02:42:05.663408 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.key.017333d5: {Name:mkf5223095261c2104865219194fdb2ac4add9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:05.663487 3734956 certs.go:381] copying /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.crt.017333d5 -> /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.crt
	I0127 02:42:05.663559 3734956 certs.go:385] copying /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.key.017333d5 -> /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.key
	I0127 02:42:05.663608 3734956 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.key
	I0127 02:42:05.663620 3734956 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.crt with IP's: []
	I0127 02:42:06.082450 3734956 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.crt ...
	I0127 02:42:06.082465 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.crt: {Name:mk367cb4694c7a90f6f0ab861c6ee8f7454d4167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:06.082653 3734956 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.key ...
	I0127 02:42:06.082661 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.key: {Name:mkc646a039146abb6bb4b51e8763ecab3439ae4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:06.082871 3734956 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800.pem (1338 bytes)
	W0127 02:42:06.082906 3734956 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800_empty.pem, impossibly tiny 0 bytes
	I0127 02:42:06.082913 3734956 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 02:42:06.082939 3734956 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:42:06.082961 3734956 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:42:06.082983 3734956 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem (1679 bytes)
	I0127 02:42:06.083025 3734956 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem (1708 bytes)
	I0127 02:42:06.083647 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:42:06.108966 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:42:06.133394 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:42:06.157386 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:42:06.181005 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 02:42:06.205492 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 02:42:06.229240 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:42:06.252121 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/scheduled-stop-403142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 02:42:06.275707 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem --> /usr/share/ca-certificates/35868002.pem (1708 bytes)
	I0127 02:42:06.300687 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:42:06.324800 3734956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800.pem --> /usr/share/ca-certificates/3586800.pem (1338 bytes)
	I0127 02:42:06.348835 3734956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:42:06.366508 3734956 ssh_runner.go:195] Run: openssl version
	I0127 02:42:06.372043 3734956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35868002.pem && ln -fs /usr/share/ca-certificates/35868002.pem /etc/ssl/certs/35868002.pem"
	I0127 02:42:06.381488 3734956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35868002.pem
	I0127 02:42:06.385148 3734956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:16 /usr/share/ca-certificates/35868002.pem
	I0127 02:42:06.385204 3734956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35868002.pem
	I0127 02:42:06.392107 3734956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35868002.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:42:06.401880 3734956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:42:06.411447 3734956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:42:06.415062 3734956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:42:06.415131 3734956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:42:06.422033 3734956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:42:06.431508 3734956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3586800.pem && ln -fs /usr/share/ca-certificates/3586800.pem /etc/ssl/certs/3586800.pem"
	I0127 02:42:06.440704 3734956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3586800.pem
	I0127 02:42:06.444205 3734956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:16 /usr/share/ca-certificates/3586800.pem
	I0127 02:42:06.444263 3734956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3586800.pem
	I0127 02:42:06.451258 3734956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3586800.pem /etc/ssl/certs/51391683.0"
	I0127 02:42:06.460514 3734956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:42:06.463730 3734956 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 02:42:06.463772 3734956 kubeadm.go:392] StartCluster: {Name:scheduled-stop-403142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-403142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:42:06.463846 3734956 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 02:42:06.463899 3734956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:42:06.500185 3734956 cri.go:89] found id: ""
	I0127 02:42:06.500250 3734956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:42:06.509364 3734956 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:42:06.518369 3734956 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 02:42:06.518434 3734956 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:42:06.528373 3734956 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:42:06.528392 3734956 kubeadm.go:157] found existing configuration files:
	
	I0127 02:42:06.528445 3734956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:42:06.537161 3734956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:42:06.537216 3734956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:42:06.545446 3734956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:42:06.554189 3734956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:42:06.554247 3734956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:42:06.563080 3734956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:42:06.572096 3734956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:42:06.572156 3734956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:42:06.580961 3734956 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:42:06.591494 3734956 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:42:06.591550 3734956 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:42:06.601325 3734956 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 02:42:06.648428 3734956 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 02:42:06.648479 3734956 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 02:42:06.671520 3734956 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 02:42:06.671587 3734956 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 02:42:06.671621 3734956 kubeadm.go:310] OS: Linux
	I0127 02:42:06.671666 3734956 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 02:42:06.671714 3734956 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 02:42:06.671760 3734956 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 02:42:06.671808 3734956 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 02:42:06.671856 3734956 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 02:42:06.671903 3734956 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 02:42:06.671947 3734956 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 02:42:06.672008 3734956 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 02:42:06.672062 3734956 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 02:42:06.734408 3734956 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 02:42:06.734513 3734956 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 02:42:06.734604 3734956 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 02:42:06.746024 3734956 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 02:42:06.752361 3734956 out.go:235]   - Generating certificates and keys ...
	I0127 02:42:06.752536 3734956 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 02:42:06.752606 3734956 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 02:42:06.971360 3734956 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 02:42:07.194436 3734956 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 02:42:08.143305 3734956 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 02:42:08.813266 3734956 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 02:42:09.288011 3734956 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 02:42:09.288291 3734956 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-403142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 02:42:09.603835 3734956 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 02:42:09.604106 3734956 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-403142] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 02:42:10.445654 3734956 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 02:42:11.208312 3734956 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 02:42:11.949736 3734956 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 02:42:11.949802 3734956 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 02:42:12.379384 3734956 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 02:42:13.177641 3734956 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 02:42:13.809051 3734956 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 02:42:14.092681 3734956 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 02:42:14.420653 3734956 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 02:42:14.421245 3734956 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 02:42:14.424139 3734956 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 02:42:14.427713 3734956 out.go:235]   - Booting up control plane ...
	I0127 02:42:14.427829 3734956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 02:42:14.427904 3734956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 02:42:14.427971 3734956 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 02:42:14.437630 3734956 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 02:42:14.443881 3734956 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 02:42:14.443929 3734956 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 02:42:14.550575 3734956 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 02:42:14.550693 3734956 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 02:42:16.543064 3734956 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.00095755s
	I0127 02:42:16.543164 3734956 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 02:42:23.044893 3734956 kubeadm.go:310] [api-check] The API server is healthy after 6.501864409s
	I0127 02:42:23.068902 3734956 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 02:42:23.091744 3734956 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 02:42:23.123632 3734956 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 02:42:23.123852 3734956 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-403142 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 02:42:23.135477 3734956 kubeadm.go:310] [bootstrap-token] Using token: qta9rx.je3htja9l90ga7qv
	I0127 02:42:23.138388 3734956 out.go:235]   - Configuring RBAC rules ...
	I0127 02:42:23.138525 3734956 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 02:42:23.143559 3734956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 02:42:23.155266 3734956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 02:42:23.159658 3734956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 02:42:23.164380 3734956 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 02:42:23.171458 3734956 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 02:42:23.452636 3734956 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 02:42:23.887839 3734956 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 02:42:24.452237 3734956 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 02:42:24.453414 3734956 kubeadm.go:310] 
	I0127 02:42:24.453481 3734956 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 02:42:24.453486 3734956 kubeadm.go:310] 
	I0127 02:42:24.453562 3734956 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 02:42:24.453566 3734956 kubeadm.go:310] 
	I0127 02:42:24.453590 3734956 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 02:42:24.453648 3734956 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 02:42:24.453698 3734956 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 02:42:24.453701 3734956 kubeadm.go:310] 
	I0127 02:42:24.453754 3734956 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 02:42:24.453758 3734956 kubeadm.go:310] 
	I0127 02:42:24.453804 3734956 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 02:42:24.453807 3734956 kubeadm.go:310] 
	I0127 02:42:24.453858 3734956 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 02:42:24.453933 3734956 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 02:42:24.454016 3734956 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 02:42:24.454020 3734956 kubeadm.go:310] 
	I0127 02:42:24.454153 3734956 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 02:42:24.454243 3734956 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 02:42:24.454248 3734956 kubeadm.go:310] 
	I0127 02:42:24.454330 3734956 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qta9rx.je3htja9l90ga7qv \
	I0127 02:42:24.454445 3734956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:83891a1b2b837c79fabbfd6fe62cd9786dc4221059a44014b5acb94babe950cd \
	I0127 02:42:24.454465 3734956 kubeadm.go:310] 	--control-plane 
	I0127 02:42:24.454468 3734956 kubeadm.go:310] 
	I0127 02:42:24.454552 3734956 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 02:42:24.454555 3734956 kubeadm.go:310] 
	I0127 02:42:24.454642 3734956 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qta9rx.je3htja9l90ga7qv \
	I0127 02:42:24.454748 3734956 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:83891a1b2b837c79fabbfd6fe62cd9786dc4221059a44014b5acb94babe950cd 
	I0127 02:42:24.459506 3734956 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 02:42:24.459781 3734956 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 02:42:24.459905 3734956 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 02:42:24.459965 3734956 cni.go:84] Creating CNI manager for ""
	I0127 02:42:24.459974 3734956 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:42:24.465005 3734956 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 02:42:24.467894 3734956 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 02:42:24.471890 3734956 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 02:42:24.471901 3734956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 02:42:24.491615 3734956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 02:42:24.784055 3734956 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 02:42:24.784183 3734956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:42:24.784260 3734956 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-403142 minikube.k8s.io/updated_at=2025_01_27T02_42_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=scheduled-stop-403142 minikube.k8s.io/primary=true
	I0127 02:42:24.975276 3734956 ops.go:34] apiserver oom_adj: -16
	I0127 02:42:24.975299 3734956 kubeadm.go:1113] duration metric: took 191.170998ms to wait for elevateKubeSystemPrivileges
	I0127 02:42:24.975336 3734956 kubeadm.go:394] duration metric: took 18.511566323s to StartCluster
	I0127 02:42:24.975354 3734956 settings.go:142] acquiring lock: {Name:mk735c76882f337c2ca62b3dd2d1bbcced4c92cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:24.975439 3734956 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:42:24.976116 3734956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/kubeconfig: {Name:mkc8ad8c78feebc7c27d31aea066c6fc5e1767bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:42:24.976340 3734956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 02:42:24.976343 3734956 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 02:42:24.976620 3734956 config.go:182] Loaded profile config "scheduled-stop-403142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:42:24.976676 3734956 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 02:42:24.976754 3734956 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-403142"
	I0127 02:42:24.976772 3734956 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-403142"
	I0127 02:42:24.976794 3734956 host.go:66] Checking if "scheduled-stop-403142" exists ...
	I0127 02:42:24.976822 3734956 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-403142"
	I0127 02:42:24.976836 3734956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-403142"
	I0127 02:42:24.977154 3734956 cli_runner.go:164] Run: docker container inspect scheduled-stop-403142 --format={{.State.Status}}
	I0127 02:42:24.977348 3734956 cli_runner.go:164] Run: docker container inspect scheduled-stop-403142 --format={{.State.Status}}
	I0127 02:42:24.979529 3734956 out.go:177] * Verifying Kubernetes components...
	I0127 02:42:24.986218 3734956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:42:25.016047 3734956 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-403142"
	I0127 02:42:25.016080 3734956 host.go:66] Checking if "scheduled-stop-403142" exists ...
	I0127 02:42:25.017253 3734956 cli_runner.go:164] Run: docker container inspect scheduled-stop-403142 --format={{.State.Status}}
	I0127 02:42:25.025409 3734956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:42:25.028462 3734956 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:42:25.028476 3734956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 02:42:25.028550 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:25.055991 3734956 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 02:42:25.056005 3734956 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 02:42:25.056085 3734956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-403142
	I0127 02:42:25.075120 3734956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37686 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa Username:docker}
	I0127 02:42:25.103561 3734956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37686 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/scheduled-stop-403142/id_rsa Username:docker}
	I0127 02:42:25.228061 3734956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 02:42:25.228164 3734956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:42:25.332141 3734956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:42:25.333416 3734956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:42:25.588091 3734956 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:42:25.588142 3734956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:42:25.588248 3734956 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0127 02:42:25.872065 3734956 api_server.go:72] duration metric: took 895.699269ms to wait for apiserver process to appear ...
	I0127 02:42:25.872074 3734956 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:42:25.872090 3734956 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 02:42:25.882574 3734956 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 02:42:25.883790 3734956 api_server.go:141] control plane version: v1.32.1
	I0127 02:42:25.883805 3734956 api_server.go:131] duration metric: took 11.725518ms to wait for apiserver health ...
	I0127 02:42:25.883811 3734956 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 02:42:25.890151 3734956 system_pods.go:59] 5 kube-system pods found
	I0127 02:42:25.890162 3734956 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 02:42:25.890175 3734956 system_pods.go:61] "etcd-scheduled-stop-403142" [402ba61e-1886-4358-a49a-8475e6884554] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 02:42:25.890182 3734956 system_pods.go:61] "kube-apiserver-scheduled-stop-403142" [b0c16e5c-c202-48c4-a5c6-0218b6873bbf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 02:42:25.890190 3734956 system_pods.go:61] "kube-controller-manager-scheduled-stop-403142" [77ef8281-a875-4b3c-b3ec-0e261121189f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 02:42:25.890197 3734956 system_pods.go:61] "kube-scheduler-scheduled-stop-403142" [2d2f4811-7ad1-469c-b111-d59f62e7cd06] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 02:42:25.890201 3734956 system_pods.go:61] "storage-provisioner" [029f640c-de68-47fd-96dd-88130384396c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0127 02:42:25.890212 3734956 system_pods.go:74] duration metric: took 6.38978ms to wait for pod list to return data ...
	I0127 02:42:25.890223 3734956 kubeadm.go:582] duration metric: took 913.860883ms to wait for: map[apiserver:true system_pods:true]
	I0127 02:42:25.890287 3734956 node_conditions.go:102] verifying NodePressure condition ...
	I0127 02:42:25.893530 3734956 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0127 02:42:25.893547 3734956 node_conditions.go:123] node cpu capacity is 2
	I0127 02:42:25.893557 3734956 node_conditions.go:105] duration metric: took 3.265592ms to run NodePressure ...
	I0127 02:42:25.893567 3734956 start.go:241] waiting for startup goroutines ...
	I0127 02:42:25.893577 3734956 addons.go:514] duration metric: took 916.905329ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 02:42:26.092439 3734956 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-403142" context rescaled to 1 replicas
	I0127 02:42:26.092462 3734956 start.go:246] waiting for cluster config update ...
	I0127 02:42:26.092472 3734956 start.go:255] writing updated cluster config ...
	I0127 02:42:26.092761 3734956 ssh_runner.go:195] Run: rm -f paused
	I0127 02:42:26.163306 3734956 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 02:42:26.166553 3734956 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-403142" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a44fa8c758070       7fc9d4aa817aa       11 seconds ago      Running             etcd                      0                   535344203a34f       etcd-scheduled-stop-403142
	c0bfa19c4f045       265c2dedf28ab       11 seconds ago      Running             kube-apiserver            0                   744b50547c1f7       kube-apiserver-scheduled-stop-403142
	f849e424becdd       ddb38cac617cb       11 seconds ago      Running             kube-scheduler            0                   2b92de8c36c84       kube-scheduler-scheduled-stop-403142
	578769dc997b6       2933761aa7ada       11 seconds ago      Running             kube-controller-manager   0                   9162c2291407b       kube-controller-manager-scheduled-stop-403142
	
	
	==> containerd <==
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.715142417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.724002217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.724169621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.724204451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.724442762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.801181947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-403142,Uid:1f780fd35d1d7676ed22895cddf3a275,Namespace:kube-system,Attempt:0,} returns sandbox id \"9162c2291407b6bc71814b1dc949db1f44a8954e877d19a9fb01c10160d44cda\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.818595135Z" level=info msg="CreateContainer within sandbox \"9162c2291407b6bc71814b1dc949db1f44a8954e877d19a9fb01c10160d44cda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.832172958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-403142,Uid:3568a00adda03d4c146e1649e4f29ad0,Namespace:kube-system,Attempt:0,} returns sandbox id \"744b50547c1f7b8f49e4cf2f035b0135e1d78312345520451b2c630de8131e54\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.832520961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-403142,Uid:db2cc2783bcbf55452c6c1530655d8b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b92de8c36c84677c4dd531bfe0cfdd01ef657b2e81676db11e92fd67531d9da\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.842300399Z" level=info msg="CreateContainer within sandbox \"744b50547c1f7b8f49e4cf2f035b0135e1d78312345520451b2c630de8131e54\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.844780748Z" level=info msg="CreateContainer within sandbox \"2b92de8c36c84677c4dd531bfe0cfdd01ef657b2e81676db11e92fd67531d9da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.849404259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-403142,Uid:39bcb589b15eebd4a33b2c34b9e7c266,Namespace:kube-system,Attempt:0,} returns sandbox id \"535344203a34f01855afbed02c886a93ba26e4787707b14288e2041c48376d31\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.852117021Z" level=info msg="CreateContainer within sandbox \"535344203a34f01855afbed02c886a93ba26e4787707b14288e2041c48376d31\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.866488923Z" level=info msg="CreateContainer within sandbox \"9162c2291407b6bc71814b1dc949db1f44a8954e877d19a9fb01c10160d44cda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"578769dc997b65c220226475f737b39eb343eb078715c90bdf342d323fbc7154\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.867469819Z" level=info msg="StartContainer for \"578769dc997b65c220226475f737b39eb343eb078715c90bdf342d323fbc7154\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.884265260Z" level=info msg="CreateContainer within sandbox \"2b92de8c36c84677c4dd531bfe0cfdd01ef657b2e81676db11e92fd67531d9da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f849e424becdd41a7dd850de9d77d34397cf4bec89568a4deba210a9cd74df63\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.885081934Z" level=info msg="StartContainer for \"f849e424becdd41a7dd850de9d77d34397cf4bec89568a4deba210a9cd74df63\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.887572130Z" level=info msg="CreateContainer within sandbox \"744b50547c1f7b8f49e4cf2f035b0135e1d78312345520451b2c630de8131e54\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c0bfa19c4f0453ba29cb932bd5aeb70ce08db42774743283f95a3087d7536aef\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.888998347Z" level=info msg="StartContainer for \"c0bfa19c4f0453ba29cb932bd5aeb70ce08db42774743283f95a3087d7536aef\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.919641413Z" level=info msg="CreateContainer within sandbox \"535344203a34f01855afbed02c886a93ba26e4787707b14288e2041c48376d31\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"a44fa8c758070cb77953e752b550835540f36e6dbeee0f7955466f54260280bf\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.927948489Z" level=info msg="StartContainer for \"a44fa8c758070cb77953e752b550835540f36e6dbeee0f7955466f54260280bf\""
	Jan 27 02:42:16 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:16.982361605Z" level=info msg="StartContainer for \"578769dc997b65c220226475f737b39eb343eb078715c90bdf342d323fbc7154\" returns successfully"
	Jan 27 02:42:17 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:17.052658268Z" level=info msg="StartContainer for \"c0bfa19c4f0453ba29cb932bd5aeb70ce08db42774743283f95a3087d7536aef\" returns successfully"
	Jan 27 02:42:17 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:17.083769661Z" level=info msg="StartContainer for \"f849e424becdd41a7dd850de9d77d34397cf4bec89568a4deba210a9cd74df63\" returns successfully"
	Jan 27 02:42:17 scheduled-stop-403142 containerd[830]: time="2025-01-27T02:42:17.176056207Z" level=info msg="StartContainer for \"a44fa8c758070cb77953e752b550835540f36e6dbeee0f7955466f54260280bf\" returns successfully"
	
	
	==> describe nodes <==
	Name:               scheduled-stop-403142
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-403142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=scheduled-stop-403142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T02_42_24_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 02:42:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-403142
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 02:42:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 02:42:21 +0000   Mon, 27 Jan 2025 02:42:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 02:42:21 +0000   Mon, 27 Jan 2025 02:42:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 02:42:21 +0000   Mon, 27 Jan 2025 02:42:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 02:42:21 +0000   Mon, 27 Jan 2025 02:42:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-403142
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c2e3a3fe730a4a1382563dc958dd4700
	  System UUID:                c9f2a712-e800-42ec-968f-c1f87c3a5dc4
	  Boot ID:                    ed5e2339-9d7b-4ad8-ab13-7fed1ac53390
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (5 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-403142                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-403142             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-controller-manager-scheduled-stop-403142    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-403142             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node scheduled-stop-403142 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node scheduled-stop-403142 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x7 over 12s)  kubelet          Node scheduled-stop-403142 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s                 kubelet          Node scheduled-stop-403142 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet          Node scheduled-stop-403142 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet          Node scheduled-stop-403142 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           1s                 node-controller  Node scheduled-stop-403142 event: Registered Node scheduled-stop-403142 in Controller
	
	
	==> dmesg <==
	[Jan27 01:33] systemd-journald[221]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Jan27 01:42] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28/fs': -2
	
	
	==> etcd [a44fa8c758070cb77953e752b550835540f36e6dbeee0f7955466f54260280bf] <==
	{"level":"info","ts":"2025-01-27T02:42:17.294061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-01-27T02:42:17.294977Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-01-27T02:42:17.294466Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T02:42:17.295287Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-01-27T02:42:17.295387Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-01-27T02:42:17.446135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-01-27T02:42:17.446350Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-01-27T02:42:17.446503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-01-27T02:42:17.446586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-01-27T02:42:17.446669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-01-27T02:42:17.446753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-01-27T02:42:17.446840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-01-27T02:42:17.450255Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:42:17.454371Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:scheduled-stop-403142 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T02:42:17.454551Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T02:42:17.455129Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T02:42:17.456142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T02:42:17.457141Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T02:42:17.457392Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:42:17.457630Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:42:17.457737Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T02:42:17.458564Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T02:42:17.459237Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-01-27T02:42:17.459387Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T02:42:17.470425Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:42:28 up 1 day,  1:24,  0 users,  load average: 2.58, 2.33, 2.51
	Linux scheduled-stop-403142 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c0bfa19c4f0453ba29cb932bd5aeb70ce08db42774743283f95a3087d7536aef] <==
	I0127 02:42:20.970667       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 02:42:20.970756       1 cache.go:39] Caches are synced for autoregister controller
	I0127 02:42:21.016160       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 02:42:21.021918       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0127 02:42:21.050191       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0127 02:42:21.052145       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 02:42:21.052865       1 policy_source.go:240] refreshing policies
	I0127 02:42:21.082220       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 02:42:21.092166       1 controller.go:615] quota admission added evaluator for: namespaces
	E0127 02:42:21.145833       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0127 02:42:21.268333       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 02:42:21.813785       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0127 02:42:21.821164       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0127 02:42:21.822117       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 02:42:22.593763       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 02:42:22.647523       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 02:42:22.787865       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0127 02:42:22.795371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 02:42:22.796520       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 02:42:22.801692       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 02:42:22.963271       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 02:42:23.867661       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 02:42:23.885966       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 02:42:23.901227       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 02:42:28.217835       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [578769dc997b65c220226475f737b39eb343eb078715c90bdf342d323fbc7154] <==
	I0127 02:42:27.561828       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 02:42:27.562006       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 02:42:27.562705       1 shared_informer.go:320] Caches are synced for HPA
	I0127 02:42:27.563100       1 shared_informer.go:320] Caches are synced for job
	I0127 02:42:27.563300       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 02:42:27.563894       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 02:42:27.563391       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 02:42:27.563423       1 shared_informer.go:320] Caches are synced for deployment
	I0127 02:42:27.563437       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 02:42:27.563454       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 02:42:27.563476       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 02:42:27.570829       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 02:42:27.571328       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 02:42:27.571467       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 02:42:27.571283       1 shared_informer.go:320] Caches are synced for node
	I0127 02:42:27.572055       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 02:42:27.572198       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 02:42:27.572294       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 02:42:27.572384       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 02:42:27.594712       1 shared_informer.go:320] Caches are synced for disruption
	I0127 02:42:27.594847       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 02:42:27.607228       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-403142" podCIDRs=["10.244.0.0/24"]
	I0127 02:42:27.607262       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-403142"
	I0127 02:42:27.607500       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-403142"
	I0127 02:42:28.174569       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-403142"
	
	
	==> kube-scheduler [f849e424becdd41a7dd850de9d77d34397cf4bec89568a4deba210a9cd74df63] <==
	W0127 02:42:22.053852       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 02:42:22.053916       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.054039       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 02:42:22.054122       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.054217       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 02:42:22.054278       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.054369       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 02:42:22.054416       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.054523       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 02:42:22.054576       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.054649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 02:42:22.054701       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.054780       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 02:42:22.054833       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.055006       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 02:42:22.055060       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.055145       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 02:42:22.055192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.055328       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 02:42:22.055393       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.055428       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 02:42:22.055482       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 02:42:22.055719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 02:42:22.055785       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 02:42:23.143190       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: I0127 02:42:24.787798    1537 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: I0127 02:42:24.850920    1537 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-403142"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: I0127 02:42:24.851604    1537 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-403142"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: E0127 02:42:24.883823    1537 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-403142\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-403142"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: E0127 02:42:24.891999    1537 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-403142\" already exists" pod="kube-system/etcd-scheduled-stop-403142"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: I0127 02:42:24.902510    1537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-403142" podStartSLOduration=0.902475084 podStartE2EDuration="902.475084ms" podCreationTimestamp="2025-01-27 02:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 02:42:24.8875931 +0000 UTC m=+1.220565241" watchObservedRunningTime="2025-01-27 02:42:24.902475084 +0000 UTC m=+1.235447226"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: I0127 02:42:24.918346    1537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-403142" podStartSLOduration=0.918326329 podStartE2EDuration="918.326329ms" podCreationTimestamp="2025-01-27 02:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 02:42:24.90352865 +0000 UTC m=+1.236500800" watchObservedRunningTime="2025-01-27 02:42:24.918326329 +0000 UTC m=+1.251298479"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: I0127 02:42:24.940813    1537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-403142" podStartSLOduration=0.940789265 podStartE2EDuration="940.789265ms" podCreationTimestamp="2025-01-27 02:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 02:42:24.919162499 +0000 UTC m=+1.252134641" watchObservedRunningTime="2025-01-27 02:42:24.940789265 +0000 UTC m=+1.273761415"
	Jan 27 02:42:24 scheduled-stop-403142 kubelet[1537]: I0127 02:42:24.941151    1537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-403142" podStartSLOduration=2.9411435360000002 podStartE2EDuration="2.941143536s" podCreationTimestamp="2025-01-27 02:42:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 02:42:24.937402134 +0000 UTC m=+1.270374275" watchObservedRunningTime="2025-01-27 02:42:24.941143536 +0000 UTC m=+1.274115678"
	Jan 27 02:42:27 scheduled-stop-403142 kubelet[1537]: I0127 02:42:27.713037    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/029f640c-de68-47fd-96dd-88130384396c-tmp\") pod \"storage-provisioner\" (UID: \"029f640c-de68-47fd-96dd-88130384396c\") " pod="kube-system/storage-provisioner"
	Jan 27 02:42:27 scheduled-stop-403142 kubelet[1537]: I0127 02:42:27.713479    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqbl2\" (UniqueName: \"kubernetes.io/projected/029f640c-de68-47fd-96dd-88130384396c-kube-api-access-zqbl2\") pod \"storage-provisioner\" (UID: \"029f640c-de68-47fd-96dd-88130384396c\") " pod="kube-system/storage-provisioner"
	Jan 27 02:42:27 scheduled-stop-403142 kubelet[1537]: E0127 02:42:27.823878    1537 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 27 02:42:27 scheduled-stop-403142 kubelet[1537]: E0127 02:42:27.823916    1537 projected.go:194] Error preparing data for projected volume kube-api-access-zqbl2 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jan 27 02:42:27 scheduled-stop-403142 kubelet[1537]: E0127 02:42:27.823986    1537 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/029f640c-de68-47fd-96dd-88130384396c-kube-api-access-zqbl2 podName:029f640c-de68-47fd-96dd-88130384396c nodeName:}" failed. No retries permitted until 2025-01-27 02:42:28.323960838 +0000 UTC m=+4.656932980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zqbl2" (UniqueName: "kubernetes.io/projected/029f640c-de68-47fd-96dd-88130384396c-kube-api-access-zqbl2") pod "storage-provisioner" (UID: "029f640c-de68-47fd-96dd-88130384396c") : configmap "kube-root-ca.crt" not found
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420405    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3a9e3be-b238-49fc-848d-fa32ddb2cdb2-xtables-lock\") pod \"kube-proxy-t5wqc\" (UID: \"c3a9e3be-b238-49fc-848d-fa32ddb2cdb2\") " pod="kube-system/kube-proxy-t5wqc"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420461    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3a9e3be-b238-49fc-848d-fa32ddb2cdb2-kube-proxy\") pod \"kube-proxy-t5wqc\" (UID: \"c3a9e3be-b238-49fc-848d-fa32ddb2cdb2\") " pod="kube-system/kube-proxy-t5wqc"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420482    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e-lib-modules\") pod \"kindnet-lbl6c\" (UID: \"6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e\") " pod="kube-system/kindnet-lbl6c"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420522    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3a9e3be-b238-49fc-848d-fa32ddb2cdb2-lib-modules\") pod \"kube-proxy-t5wqc\" (UID: \"c3a9e3be-b238-49fc-848d-fa32ddb2cdb2\") " pod="kube-system/kube-proxy-t5wqc"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420543    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e-cni-cfg\") pod \"kindnet-lbl6c\" (UID: \"6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e\") " pod="kube-system/kindnet-lbl6c"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420561    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e-xtables-lock\") pod \"kindnet-lbl6c\" (UID: \"6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e\") " pod="kube-system/kindnet-lbl6c"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420580    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m78gx\" (UniqueName: \"kubernetes.io/projected/c3a9e3be-b238-49fc-848d-fa32ddb2cdb2-kube-api-access-m78gx\") pod \"kube-proxy-t5wqc\" (UID: \"c3a9e3be-b238-49fc-848d-fa32ddb2cdb2\") " pod="kube-system/kube-proxy-t5wqc"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: I0127 02:42:28.420600    1537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cfp\" (UniqueName: \"kubernetes.io/projected/6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e-kube-api-access-z8cfp\") pod \"kindnet-lbl6c\" (UID: \"6d8c4fd9-9ac5-4b5b-87b0-4aa8906d614e\") " pod="kube-system/kindnet-lbl6c"
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: E0127 02:42:28.420748    1537 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: E0127 02:42:28.420770    1537 projected.go:194] Error preparing data for projected volume kube-api-access-zqbl2 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jan 27 02:42:28 scheduled-stop-403142 kubelet[1537]: E0127 02:42:28.420819    1537 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/029f640c-de68-47fd-96dd-88130384396c-kube-api-access-zqbl2 podName:029f640c-de68-47fd-96dd-88130384396c nodeName:}" failed. No retries permitted until 2025-01-27 02:42:29.420795526 +0000 UTC m=+5.753767667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zqbl2" (UniqueName: "kubernetes.io/projected/029f640c-de68-47fd-96dd-88130384396c-kube-api-access-zqbl2") pod "storage-provisioner" (UID: "029f640c-de68-47fd-96dd-88130384396c") : configmap "kube-root-ca.crt" not found
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-403142 -n scheduled-stop-403142
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-403142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-pc54p kindnet-lbl6c kube-proxy-t5wqc storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-403142 describe pod coredns-668d6bf9bc-pc54p kindnet-lbl6c kube-proxy-t5wqc storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-403142 describe pod coredns-668d6bf9bc-pc54p kindnet-lbl6c kube-proxy-t5wqc storage-provisioner: exit status 1 (105.252834ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-pc54p" not found
	Error from server (NotFound): pods "kindnet-lbl6c" not found
	Error from server (NotFound): pods "kube-proxy-t5wqc" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-403142 describe pod coredns-668d6bf9bc-pc54p kindnet-lbl6c kube-proxy-t5wqc storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-403142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-403142
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-403142: (2.022319439s)
--- FAIL: TestScheduledStopUnix (38.59s)
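As a rough local-triage sketch (assumptions: minikube's integration tests, including the helpers_test.go and start_stop_delete_test.go files cited in this report, conventionally live under test/integration, and the test harness may expect extra flags such as the path to a freshly built minikube binary that are not shown here), the failing test can usually be re-run by name with the standard Go test runner:

	go test ./test/integration -run TestScheduledStopUnix -timeout 30m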

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (377.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-949994 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-949994 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.926387341s)

                                                
                                                
-- stdout --
	* [old-k8s-version-949994] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-949994" primary control-plane node in "old-k8s-version-949994" cluster
	* Pulling base image v0.0.46 ...
	* Restarting existing docker container for "old-k8s-version-949994" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-949994 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:53:23.329081 3796111 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:53:23.329301 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:53:23.329328 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:53:23.329347 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:53:23.329620 3796111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:53:23.330051 3796111 out.go:352] Setting JSON to false
	I0127 02:53:23.331073 3796111 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":92147,"bootTime":1737854256,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:53:23.331175 3796111 start.go:139] virtualization:  
	I0127 02:53:23.334733 3796111 out.go:177] * [old-k8s-version-949994] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 02:53:23.338614 3796111 notify.go:220] Checking for updates...
	I0127 02:53:23.341995 3796111 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:53:23.345117 3796111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:53:23.348029 3796111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:53:23.350915 3796111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:53:23.353813 3796111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 02:53:23.356744 3796111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:53:23.360206 3796111 config.go:182] Loaded profile config "old-k8s-version-949994": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 02:53:23.363617 3796111 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 02:53:23.366383 3796111 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:53:23.395585 3796111 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:53:23.395716 3796111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:53:23.454502 3796111 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:56 SystemTime:2025-01-27 02:53:23.44464573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:53:23.454621 3796111 docker.go:318] overlay module found
	I0127 02:53:23.457823 3796111 out.go:177] * Using the docker driver based on existing profile
	I0127 02:53:23.460620 3796111 start.go:297] selected driver: docker
	I0127 02:53:23.460642 3796111 start.go:901] validating driver "docker" against &{Name:old-k8s-version-949994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-949994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:53:23.460758 3796111 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:53:23.461493 3796111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:53:23.515773 3796111 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:56 SystemTime:2025-01-27 02:53:23.506204627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:53:23.516173 3796111 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:53:23.516205 3796111 cni.go:84] Creating CNI manager for ""
	I0127 02:53:23.516254 3796111 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:53:23.516296 3796111 start.go:340] cluster config:
	{Name:old-k8s-version-949994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-949994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:53:23.519637 3796111 out.go:177] * Starting "old-k8s-version-949994" primary control-plane node in "old-k8s-version-949994" cluster
	I0127 02:53:23.522414 3796111 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 02:53:23.525360 3796111 out.go:177] * Pulling base image v0.0.46 ...
	I0127 02:53:23.528155 3796111 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 02:53:23.528214 3796111 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 02:53:23.528233 3796111 cache.go:56] Caching tarball of preloaded images
	I0127 02:53:23.528241 3796111 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 02:53:23.528326 3796111 preload.go:172] Found /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0127 02:53:23.528338 3796111 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0127 02:53:23.528447 3796111 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/config.json ...
	I0127 02:53:23.547260 3796111 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 02:53:23.547284 3796111 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 02:53:23.547299 3796111 cache.go:230] Successfully downloaded all kic artifacts
	I0127 02:53:23.547329 3796111 start.go:360] acquireMachinesLock for old-k8s-version-949994: {Name:mk8caef1ff0c794c9f7daffab3099799a376ca86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:53:23.547387 3796111 start.go:364] duration metric: took 34.961µs to acquireMachinesLock for "old-k8s-version-949994"
	I0127 02:53:23.547412 3796111 start.go:96] Skipping create...Using existing machine configuration
	I0127 02:53:23.547423 3796111 fix.go:54] fixHost starting: 
	I0127 02:53:23.547681 3796111 cli_runner.go:164] Run: docker container inspect old-k8s-version-949994 --format={{.State.Status}}
	I0127 02:53:23.564213 3796111 fix.go:112] recreateIfNeeded on old-k8s-version-949994: state=Stopped err=<nil>
	W0127 02:53:23.564244 3796111 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 02:53:23.567465 3796111 out.go:177] * Restarting existing docker container for "old-k8s-version-949994" ...
	I0127 02:53:23.570239 3796111 cli_runner.go:164] Run: docker start old-k8s-version-949994
	I0127 02:53:23.908279 3796111 cli_runner.go:164] Run: docker container inspect old-k8s-version-949994 --format={{.State.Status}}
	I0127 02:53:23.929410 3796111 kic.go:430] container "old-k8s-version-949994" state is running.
	I0127 02:53:23.930718 3796111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-949994
	I0127 02:53:23.950912 3796111 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/config.json ...
	I0127 02:53:23.951386 3796111 machine.go:93] provisionDockerMachine start ...
	I0127 02:53:23.951460 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:23.971656 3796111 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:23.972046 3796111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37781 <nil> <nil>}
	I0127 02:53:23.972059 3796111 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:53:23.973030 3796111 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0127 02:53:27.097504 3796111 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-949994
	
	I0127 02:53:27.097530 3796111 ubuntu.go:169] provisioning hostname "old-k8s-version-949994"
	I0127 02:53:27.097604 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:27.114778 3796111 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:27.115036 3796111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37781 <nil> <nil>}
	I0127 02:53:27.115056 3796111 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-949994 && echo "old-k8s-version-949994" | sudo tee /etc/hostname
	I0127 02:53:27.258253 3796111 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-949994
	
	I0127 02:53:27.258347 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:27.285966 3796111 main.go:141] libmachine: Using SSH client type: native
	I0127 02:53:27.286392 3796111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37781 <nil> <nil>}
	I0127 02:53:27.286422 3796111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-949994' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-949994/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-949994' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:53:27.430186 3796111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:53:27.430216 3796111 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20316-3581420/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-3581420/.minikube}
	I0127 02:53:27.430238 3796111 ubuntu.go:177] setting up certificates
	I0127 02:53:27.430250 3796111 provision.go:84] configureAuth start
	I0127 02:53:27.430315 3796111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-949994
	I0127 02:53:27.448032 3796111 provision.go:143] copyHostCerts
	I0127 02:53:27.448093 3796111 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem, removing ...
	I0127 02:53:27.448102 3796111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem
	I0127 02:53:27.448175 3796111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem (1078 bytes)
	I0127 02:53:27.449140 3796111 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem, removing ...
	I0127 02:53:27.449154 3796111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem
	I0127 02:53:27.449215 3796111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem (1123 bytes)
	I0127 02:53:27.449307 3796111 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem, removing ...
	I0127 02:53:27.449313 3796111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem
	I0127 02:53:27.449339 3796111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem (1679 bytes)
	I0127 02:53:27.452926 3796111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-949994 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-949994]
	I0127 02:53:28.668709 3796111 provision.go:177] copyRemoteCerts
	I0127 02:53:28.668966 3796111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:53:28.669038 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:28.698812 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:28.791134 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 02:53:28.819720 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:53:28.852918 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 02:53:28.887669 3796111 provision.go:87] duration metric: took 1.457405722s to configureAuth
	I0127 02:53:28.887740 3796111 ubuntu.go:193] setting minikube options for container-runtime
	I0127 02:53:28.887974 3796111 config.go:182] Loaded profile config "old-k8s-version-949994": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 02:53:28.888004 3796111 machine.go:96] duration metric: took 4.936609679s to provisionDockerMachine
	I0127 02:53:28.888027 3796111 start.go:293] postStartSetup for "old-k8s-version-949994" (driver="docker")
	I0127 02:53:28.888050 3796111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:53:28.888134 3796111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:53:28.888196 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:28.918335 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:29.017587 3796111 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:53:29.021291 3796111 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 02:53:29.021331 3796111 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 02:53:29.021343 3796111 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 02:53:29.021352 3796111 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 02:53:29.021362 3796111 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-3581420/.minikube/addons for local assets ...
	I0127 02:53:29.021426 3796111 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-3581420/.minikube/files for local assets ...
	I0127 02:53:29.021507 3796111 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem -> 35868002.pem in /etc/ssl/certs
	I0127 02:53:29.021618 3796111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:53:29.033520 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem --> /etc/ssl/certs/35868002.pem (1708 bytes)
	I0127 02:53:29.069431 3796111 start.go:296] duration metric: took 181.376719ms for postStartSetup
	I0127 02:53:29.069539 3796111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:53:29.069584 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:29.086748 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:29.176151 3796111 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 02:53:29.181203 3796111 fix.go:56] duration metric: took 5.633772041s for fixHost
	I0127 02:53:29.181228 3796111 start.go:83] releasing machines lock for "old-k8s-version-949994", held for 5.633827514s
	I0127 02:53:29.181300 3796111 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-949994
	I0127 02:53:29.198286 3796111 ssh_runner.go:195] Run: cat /version.json
	I0127 02:53:29.198347 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:29.198622 3796111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:53:29.198697 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:29.218942 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:29.240970 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:29.473885 3796111 ssh_runner.go:195] Run: systemctl --version
	I0127 02:53:29.478870 3796111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 02:53:29.483506 3796111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 02:53:29.500844 3796111 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 02:53:29.500958 3796111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:53:29.509874 3796111 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 02:53:29.509941 3796111 start.go:495] detecting cgroup driver to use...
	I0127 02:53:29.509987 3796111 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 02:53:29.510064 3796111 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 02:53:29.524775 3796111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 02:53:29.536649 3796111 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:53:29.536748 3796111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:53:29.549950 3796111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:53:29.561639 3796111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:53:29.645297 3796111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:53:29.734323 3796111 docker.go:233] disabling docker service ...
	I0127 02:53:29.734440 3796111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:53:29.747766 3796111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:53:29.759061 3796111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:53:29.852247 3796111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:53:29.935040 3796111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:53:29.946316 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:53:29.963146 3796111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0127 02:53:29.972975 3796111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 02:53:29.982718 3796111 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 02:53:29.982793 3796111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 02:53:29.992718 3796111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:53:30.005663 3796111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 02:53:30.021465 3796111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:53:30.040260 3796111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:53:30.051245 3796111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 02:53:30.062930 3796111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:53:30.074437 3796111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:53:30.084203 3796111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:53:30.172078 3796111 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 02:53:30.362861 3796111 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 02:53:30.362982 3796111 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:53:30.366677 3796111 start.go:563] Will wait 60s for crictl version
	I0127 02:53:30.366780 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:53:30.370077 3796111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:53:30.406531 3796111 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0127 02:53:30.406697 3796111 ssh_runner.go:195] Run: containerd --version
	I0127 02:53:30.433813 3796111 ssh_runner.go:195] Run: containerd --version
	I0127 02:53:30.462236 3796111 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	I0127 02:53:30.465289 3796111 cli_runner.go:164] Run: docker network inspect old-k8s-version-949994 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 02:53:30.481479 3796111 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 02:53:30.484890 3796111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:53:30.495309 3796111 kubeadm.go:883] updating cluster {Name:old-k8s-version-949994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-949994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:53:30.495428 3796111 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 02:53:30.495490 3796111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:53:30.534120 3796111 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:53:30.534142 3796111 containerd.go:534] Images already preloaded, skipping extraction
	I0127 02:53:30.534201 3796111 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:53:30.568738 3796111 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:53:30.568758 3796111 cache_images.go:84] Images are preloaded, skipping loading
	I0127 02:53:30.568766 3796111 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0127 02:53:30.568878 3796111 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-949994 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-949994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:53:30.568940 3796111 ssh_runner.go:195] Run: sudo crictl info
	I0127 02:53:30.609531 3796111 cni.go:84] Creating CNI manager for ""
	I0127 02:53:30.609564 3796111 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:53:30.609575 3796111 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:53:30.609627 3796111 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-949994 NodeName:old-k8s-version-949994 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 02:53:30.609800 3796111 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-949994"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:53:30.609885 3796111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 02:53:30.619581 3796111 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:53:30.619678 3796111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:53:30.628059 3796111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0127 02:53:30.645266 3796111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:53:30.663024 3796111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0127 02:53:30.680785 3796111 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 02:53:30.684131 3796111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:53:30.694941 3796111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:53:30.783185 3796111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:53:30.797915 3796111 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994 for IP: 192.168.76.2
	I0127 02:53:30.797946 3796111 certs.go:194] generating shared ca certs ...
	I0127 02:53:30.797963 3796111 certs.go:226] acquiring lock for ca certs: {Name:mk1bae14ef6af74439063c8478bc03213541b880 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:53:30.798190 3796111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.key
	I0127 02:53:30.798244 3796111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.key
	I0127 02:53:30.798257 3796111 certs.go:256] generating profile certs ...
	I0127 02:53:30.798344 3796111 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.key
	I0127 02:53:30.798414 3796111 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/apiserver.key.05cfac9d
	I0127 02:53:30.798463 3796111 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/proxy-client.key
	I0127 02:53:30.798578 3796111 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800.pem (1338 bytes)
	W0127 02:53:30.798610 3796111 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800_empty.pem, impossibly tiny 0 bytes
	I0127 02:53:30.798624 3796111 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 02:53:30.798648 3796111 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:53:30.798676 3796111 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:53:30.798700 3796111 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem (1679 bytes)
	I0127 02:53:30.798745 3796111 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem (1708 bytes)
	I0127 02:53:30.799347 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:53:30.830541 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:53:30.855768 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:53:30.880883 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:53:30.906006 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 02:53:30.930978 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 02:53:30.958628 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:53:30.986129 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 02:53:31.013051 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:53:31.037419 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800.pem --> /usr/share/ca-certificates/3586800.pem (1338 bytes)
	I0127 02:53:31.061991 3796111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem --> /usr/share/ca-certificates/35868002.pem (1708 bytes)
	I0127 02:53:31.087131 3796111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:53:31.106191 3796111 ssh_runner.go:195] Run: openssl version
	I0127 02:53:31.115025 3796111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3586800.pem && ln -fs /usr/share/ca-certificates/3586800.pem /etc/ssl/certs/3586800.pem"
	I0127 02:53:31.125502 3796111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3586800.pem
	I0127 02:53:31.129126 3796111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:16 /usr/share/ca-certificates/3586800.pem
	I0127 02:53:31.129192 3796111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3586800.pem
	I0127 02:53:31.136768 3796111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3586800.pem /etc/ssl/certs/51391683.0"
	I0127 02:53:31.145984 3796111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35868002.pem && ln -fs /usr/share/ca-certificates/35868002.pem /etc/ssl/certs/35868002.pem"
	I0127 02:53:31.155353 3796111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35868002.pem
	I0127 02:53:31.159092 3796111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:16 /usr/share/ca-certificates/35868002.pem
	I0127 02:53:31.159208 3796111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35868002.pem
	I0127 02:53:31.166054 3796111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35868002.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:53:31.175068 3796111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:53:31.184641 3796111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:53:31.188178 3796111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:53:31.188251 3796111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:53:31.195159 3796111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:53:31.204090 3796111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:53:31.207625 3796111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 02:53:31.214435 3796111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 02:53:31.221319 3796111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 02:53:31.228244 3796111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 02:53:31.235165 3796111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 02:53:31.241993 3796111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 02:53:31.249701 3796111 kubeadm.go:392] StartCluster: {Name:old-k8s-version-949994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-949994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:53:31.249800 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 02:53:31.249870 3796111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:53:31.287282 3796111 cri.go:89] found id: "c13cdebc5015733e24bab16edcf17edc46267951f0fd3d8422baef4ecc5b4eb1"
	I0127 02:53:31.287357 3796111 cri.go:89] found id: "2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:53:31.287366 3796111 cri.go:89] found id: "17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:53:31.287371 3796111 cri.go:89] found id: "a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:53:31.287374 3796111 cri.go:89] found id: "f2925631dd795b8801f2679b3e0abf7fb3982c9a2992d20dab2cd4fe3d74a687"
	I0127 02:53:31.287378 3796111 cri.go:89] found id: "d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:53:31.287381 3796111 cri.go:89] found id: "8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:53:31.287384 3796111 cri.go:89] found id: "0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:53:31.287387 3796111 cri.go:89] found id: "ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:53:31.287394 3796111 cri.go:89] found id: ""
	I0127 02:53:31.287451 3796111 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 02:53:31.301632 3796111 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T02:53:31Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 02:53:31.301735 3796111 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:53:31.311428 3796111 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 02:53:31.311492 3796111 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 02:53:31.311572 3796111 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 02:53:31.320881 3796111 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 02:53:31.321348 3796111 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-949994" does not appear in /home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:53:31.321462 3796111 kubeconfig.go:62] /home/jenkins/minikube-integration/20316-3581420/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-949994" cluster setting kubeconfig missing "old-k8s-version-949994" context setting]
	I0127 02:53:31.321736 3796111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/kubeconfig: {Name:mkc8ad8c78feebc7c27d31aea066c6fc5e1767bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:53:31.322963 3796111 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 02:53:31.333842 3796111 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0127 02:53:31.333916 3796111 kubeadm.go:597] duration metric: took 22.403517ms to restartPrimaryControlPlane
	I0127 02:53:31.333933 3796111 kubeadm.go:394] duration metric: took 84.242283ms to StartCluster
	I0127 02:53:31.333948 3796111 settings.go:142] acquiring lock: {Name:mk735c76882f337c2ca62b3dd2d1bbcced4c92cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:53:31.334023 3796111 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:53:31.334803 3796111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/kubeconfig: {Name:mkc8ad8c78feebc7c27d31aea066c6fc5e1767bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:53:31.335028 3796111 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 02:53:31.335399 3796111 config.go:182] Loaded profile config "old-k8s-version-949994": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 02:53:31.335443 3796111 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 02:53:31.335514 3796111 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-949994"
	I0127 02:53:31.335541 3796111 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-949994"
	W0127 02:53:31.335553 3796111 addons.go:247] addon storage-provisioner should already be in state true
	I0127 02:53:31.335544 3796111 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-949994"
	I0127 02:53:31.335624 3796111 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-949994"
	I0127 02:53:31.335575 3796111 host.go:66] Checking if "old-k8s-version-949994" exists ...
	I0127 02:53:31.336035 3796111 cli_runner.go:164] Run: docker container inspect old-k8s-version-949994 --format={{.State.Status}}
	I0127 02:53:31.335579 3796111 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-949994"
	I0127 02:53:31.336554 3796111 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-949994"
	W0127 02:53:31.336565 3796111 addons.go:247] addon metrics-server should already be in state true
	I0127 02:53:31.336578 3796111 cli_runner.go:164] Run: docker container inspect old-k8s-version-949994 --format={{.State.Status}}
	I0127 02:53:31.336587 3796111 host.go:66] Checking if "old-k8s-version-949994" exists ...
	I0127 02:53:31.336999 3796111 cli_runner.go:164] Run: docker container inspect old-k8s-version-949994 --format={{.State.Status}}
	I0127 02:53:31.335584 3796111 addons.go:69] Setting dashboard=true in profile "old-k8s-version-949994"
	I0127 02:53:31.341037 3796111 addons.go:238] Setting addon dashboard=true in "old-k8s-version-949994"
	W0127 02:53:31.341050 3796111 addons.go:247] addon dashboard should already be in state true
	I0127 02:53:31.341087 3796111 host.go:66] Checking if "old-k8s-version-949994" exists ...
	I0127 02:53:31.341543 3796111 cli_runner.go:164] Run: docker container inspect old-k8s-version-949994 --format={{.State.Status}}
	I0127 02:53:31.343431 3796111 out.go:177] * Verifying Kubernetes components...
	I0127 02:53:31.346602 3796111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:53:31.365269 3796111 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-949994"
	W0127 02:53:31.365298 3796111 addons.go:247] addon default-storageclass should already be in state true
	I0127 02:53:31.365324 3796111 host.go:66] Checking if "old-k8s-version-949994" exists ...
	I0127 02:53:31.365738 3796111 cli_runner.go:164] Run: docker container inspect old-k8s-version-949994 --format={{.State.Status}}
	I0127 02:53:31.382665 3796111 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:53:31.390733 3796111 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:53:31.390758 3796111 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 02:53:31.390827 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:31.393897 3796111 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 02:53:31.397308 3796111 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 02:53:31.400114 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 02:53:31.400145 3796111 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 02:53:31.400216 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:31.427437 3796111 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 02:53:31.434263 3796111 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 02:53:31.434299 3796111 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 02:53:31.434373 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:31.438071 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:31.446895 3796111 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 02:53:31.446922 3796111 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 02:53:31.446988 3796111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-949994
	I0127 02:53:31.466935 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:31.526279 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:31.542056 3796111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:53:31.555502 3796111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37781 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/old-k8s-version-949994/id_rsa Username:docker}
	I0127 02:53:31.560232 3796111 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-949994" to be "Ready" ...
	I0127 02:53:31.572507 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:53:31.608936 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 02:53:31.609017 3796111 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 02:53:31.629026 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 02:53:31.629050 3796111 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 02:53:31.664349 3796111 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 02:53:31.664423 3796111 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 02:53:31.667195 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 02:53:31.667263 3796111 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 02:53:31.693111 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:53:31.734174 3796111 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 02:53:31.734254 3796111 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 02:53:31.769743 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 02:53:31.769816 3796111 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0127 02:53:31.783586 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:31.783685 3796111 retry.go:31] will retry after 308.903671ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:31.792548 3796111 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 02:53:31.792621 3796111 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 02:53:31.843833 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 02:53:31.851425 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 02:53:31.851447 3796111 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0127 02:53:31.877942 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:31.877973 3796111 retry.go:31] will retry after 264.460508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:31.906521 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 02:53:31.906543 3796111 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 02:53:31.939597 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 02:53:31.939619 3796111 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 02:53:31.962177 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 02:53:31.962199 3796111 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0127 02:53:31.984859 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:31.984891 3796111 retry.go:31] will retry after 362.645979ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:31.987802 3796111 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 02:53:31.987826 3796111 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 02:53:32.012949 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 02:53:32.093065 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 02:53:32.125400 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.125435 3796111 retry.go:31] will retry after 161.941326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.142797 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 02:53:32.250442 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.250473 3796111 retry.go:31] will retry after 250.979803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:32.256566 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.256594 3796111 retry.go:31] will retry after 260.913733ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.287911 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 02:53:32.348289 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 02:53:32.400967 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.400997 3796111 retry.go:31] will retry after 462.862077ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:32.474043 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.474073 3796111 retry.go:31] will retry after 396.89833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.502422 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:53:32.517694 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 02:53:32.629080 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.629110 3796111 retry.go:31] will retry after 569.100826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:32.675455 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.675489 3796111 retry.go:31] will retry after 806.341729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:32.864595 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 02:53:32.874474 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 02:53:33.037119 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.037152 3796111 retry.go:31] will retry after 740.905547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:33.037187 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.037194 3796111 retry.go:31] will retry after 500.746469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.199093 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 02:53:33.304641 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.304672 3796111 retry.go:31] will retry after 1.040611531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.482460 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:53:33.538412 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 02:53:33.561019 3796111 node_ready.go:53] error getting node "old-k8s-version-949994": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-949994": dial tcp 192.168.76.2:8443: connect: connection refused
	W0127 02:53:33.630898 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.630930 3796111 retry.go:31] will retry after 609.259794ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:33.762337 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.762374 3796111 retry.go:31] will retry after 906.647338ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.778774 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 02:53:33.926685 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:33.926720 3796111 retry.go:31] will retry after 781.619075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:34.240642 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 02:53:34.342660 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:34.342693 3796111 retry.go:31] will retry after 1.847735819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:34.345941 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 02:53:34.440212 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:34.440240 3796111 retry.go:31] will retry after 1.237695708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:34.670356 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 02:53:34.708739 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 02:53:34.899182 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:34.899213 3796111 retry.go:31] will retry after 763.822356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:34.930776 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:34.930807 3796111 retry.go:31] will retry after 1.541249452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:35.561437 3796111 node_ready.go:53] error getting node "old-k8s-version-949994": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-949994": dial tcp 192.168.76.2:8443: connect: connection refused
	I0127 02:53:35.663651 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 02:53:35.678986 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 02:53:35.889238 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:35.889268 3796111 retry.go:31] will retry after 1.945377799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:35.926329 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:35.926360 3796111 retry.go:31] will retry after 2.550128111s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:36.191351 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 02:53:36.320168 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:36.320197 3796111 retry.go:31] will retry after 2.184810208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:36.472595 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 02:53:36.614569 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:36.614599 3796111 retry.go:31] will retry after 2.043266532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:37.561517 3796111 node_ready.go:53] error getting node "old-k8s-version-949994": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-949994": dial tcp 192.168.76.2:8443: connect: connection refused
	I0127 02:53:37.835005 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 02:53:37.946261 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:37.946294 3796111 retry.go:31] will retry after 2.215585943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:38.476693 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:53:38.505989 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:53:38.658896 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 02:53:38.675666 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:38.675697 3796111 retry.go:31] will retry after 2.63789832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:38.746567 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:38.746661 3796111 retry.go:31] will retry after 2.641720268s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 02:53:38.850214 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:38.850297 3796111 retry.go:31] will retry after 3.339056224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:40.060814 3796111 node_ready.go:53] error getting node "old-k8s-version-949994": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-949994": dial tcp 192.168.76.2:8443: connect: connection refused
	I0127 02:53:40.162175 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 02:53:40.330399 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:40.330475 3796111 retry.go:31] will retry after 2.397678788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 02:53:41.313795 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:53:41.389354 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:53:42.190467 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 02:53:42.729219 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 02:53:51.062368 3796111 node_ready.go:53] error getting node "old-k8s-version-949994": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-949994": net/http: TLS handshake timeout
	I0127 02:53:52.142409 3796111 node_ready.go:49] node "old-k8s-version-949994" has status "Ready":"True"
	I0127 02:53:52.142433 3796111 node_ready.go:38] duration metric: took 20.582168026s for node "old-k8s-version-949994" to be "Ready" ...
	I0127 02:53:52.142444 3796111 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:53:52.394940 3796111 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-fbwzt" in "kube-system" namespace to be "Ready" ...
	I0127 02:53:52.605554 3796111 pod_ready.go:93] pod "coredns-74ff55c5b-fbwzt" in "kube-system" namespace has status "Ready":"True"
	I0127 02:53:52.605625 3796111 pod_ready.go:82] duration metric: took 210.595986ms for pod "coredns-74ff55c5b-fbwzt" in "kube-system" namespace to be "Ready" ...
	I0127 02:53:52.605652 3796111 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:53:52.694592 3796111 pod_ready.go:93] pod "etcd-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"True"
	I0127 02:53:52.694673 3796111 pod_ready.go:82] duration metric: took 88.998503ms for pod "etcd-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:53:52.694703 3796111 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:53:52.820844 3796111 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"True"
	I0127 02:53:52.820921 3796111 pod_ready.go:82] duration metric: took 126.181859ms for pod "kube-apiserver-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:53:52.820948 3796111 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:53:54.406918 3796111 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (13.017478568s)
	W0127 02:53:54.407000 3796111 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0127 02:53:54.407031 3796111 retry.go:31] will retry after 3.302396858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0127 02:53:54.411107 3796111 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (13.09721629s)
	I0127 02:53:54.868785 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:53:55.237374 3796111 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (13.0468605s)
	I0127 02:53:55.237716 3796111 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.508468132s)
	I0127 02:53:55.237781 3796111 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-949994"
	I0127 02:53:55.242785 3796111 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-949994 addons enable metrics-server
	
	I0127 02:53:57.328983 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:53:57.710619 3796111 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:53:58.270618 3796111 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0127 02:53:58.285945 3796111 addons.go:514] duration metric: took 26.950486407s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0127 02:53:59.832536 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:02.328048 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:04.827595 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:06.828080 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:09.327450 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:11.328233 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:13.331340 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:15.828464 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:17.828684 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:19.829017 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:21.832959 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:24.327508 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:26.827620 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:28.827830 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:30.827951 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:33.327618 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:35.828124 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:38.328076 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:40.827345 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:43.327961 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:45.828967 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:48.327775 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:50.328107 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:52.829898 3796111 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:54.827748 3796111 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"True"
	I0127 02:54:54.827777 3796111 pod_ready.go:82] duration metric: took 1m2.006792121s for pod "kube-controller-manager-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:54:54.827791 3796111 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5hzlg" in "kube-system" namespace to be "Ready" ...
	I0127 02:54:54.833179 3796111 pod_ready.go:93] pod "kube-proxy-5hzlg" in "kube-system" namespace has status "Ready":"True"
	I0127 02:54:54.833208 3796111 pod_ready.go:82] duration metric: took 5.408281ms for pod "kube-proxy-5hzlg" in "kube-system" namespace to be "Ready" ...
	I0127 02:54:54.833222 3796111 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:54:56.839308 3796111 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:54:58.840171 3796111 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:01.340251 3796111 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:03.839676 3796111 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:05.839918 3796111 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:08.340819 3796111 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:10.839499 3796111 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:12.841075 3796111 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace has status "Ready":"True"
	I0127 02:55:12.841104 3796111 pod_ready.go:82] duration metric: took 18.007873373s for pod "kube-scheduler-old-k8s-version-949994" in "kube-system" namespace to be "Ready" ...
	I0127 02:55:12.841118 3796111 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace to be "Ready" ...
	I0127 02:55:14.846761 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:16.848015 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:19.347166 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:21.350199 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:23.847883 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:25.847949 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:27.848010 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:30.348323 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:32.368715 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:34.848348 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:37.347130 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:39.847094 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:42.347519 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:44.848181 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:47.351581 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:49.849323 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:52.347834 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:54.348818 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:56.847564 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:55:58.848128 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:01.348494 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:03.847102 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:05.847596 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:07.847816 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:10.351672 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:12.847861 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:15.346895 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:17.347600 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:19.347737 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:21.847596 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:24.347987 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:26.348203 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:28.348456 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:30.348806 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:32.888161 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:35.348735 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:37.352293 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:39.847523 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:42.348688 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:44.847833 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:47.347201 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:49.351533 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:51.847752 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:53.847835 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:55.848108 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:56:57.848910 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:00.348250 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:02.847479 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:04.848693 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:07.347265 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:09.347525 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:11.348241 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:13.848063 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:16.347935 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:18.847592 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:21.347680 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:23.847407 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:26.347328 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:28.348648 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:30.847926 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:33.404035 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:35.848377 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:38.348244 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:40.365273 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:42.847561 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:45.347070 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:47.347580 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:49.348552 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:51.846676 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:53.848183 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:56.347696 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:57:58.847150 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:00.847916 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:03.347230 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:05.847528 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:08.349376 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:10.847628 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:12.848683 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:15.347418 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:17.348210 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:19.846925 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:22.347260 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:24.347682 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:26.848289 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:29.347499 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:31.347918 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:33.848640 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:36.348209 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:38.847716 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:41.347695 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:43.848243 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:46.350037 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:48.852359 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:51.348682 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:53.353671 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:55.850273 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:58.347216 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:00.348519 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:02.850554 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:04.869664 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:07.350731 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:09.849140 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:12.351772 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:12.849550 3796111 pod_ready.go:82] duration metric: took 4m0.00841246s for pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace to be "Ready" ...
	E0127 02:59:12.849575 3796111 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 02:59:12.849585 3796111 pod_ready.go:39] duration metric: took 5m20.707129361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:59:12.849601 3796111 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:59:12.849632 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 02:59:12.849779 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 02:59:12.908698 3796111 cri.go:89] found id: "f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:12.908718 3796111 cri.go:89] found id: "ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:12.908724 3796111 cri.go:89] found id: ""
	I0127 02:59:12.908731 3796111 logs.go:282] 2 containers: [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810]
	I0127 02:59:12.908789 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.912398 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.915700 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 02:59:12.915779 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 02:59:12.958472 3796111 cri.go:89] found id: "913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:12.958491 3796111 cri.go:89] found id: "8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:12.958495 3796111 cri.go:89] found id: ""
	I0127 02:59:12.958502 3796111 logs.go:282] 2 containers: [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9]
	I0127 02:59:12.958559 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.962269 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.965683 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 02:59:12.965751 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 02:59:13.017093 3796111 cri.go:89] found id: "80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:13.017166 3796111 cri.go:89] found id: "2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:13.017201 3796111 cri.go:89] found id: ""
	I0127 02:59:13.017228 3796111 logs.go:282] 2 containers: [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca]
	I0127 02:59:13.017327 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.021516 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.025341 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 02:59:13.025409 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 02:59:13.080460 3796111 cri.go:89] found id: "6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:13.080485 3796111 cri.go:89] found id: "d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:13.080493 3796111 cri.go:89] found id: ""
	I0127 02:59:13.080502 3796111 logs.go:282] 2 containers: [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3]
	I0127 02:59:13.080571 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.084534 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.088803 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 02:59:13.088877 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 02:59:13.158618 3796111 cri.go:89] found id: "1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:13.158644 3796111 cri.go:89] found id: "a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:13.158650 3796111 cri.go:89] found id: ""
	I0127 02:59:13.158658 3796111 logs.go:282] 2 containers: [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2]
	I0127 02:59:13.158745 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.163024 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.169387 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 02:59:13.169529 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 02:59:13.244336 3796111 cri.go:89] found id: "60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:13.244366 3796111 cri.go:89] found id: "0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:13.244375 3796111 cri.go:89] found id: ""
	I0127 02:59:13.244386 3796111 logs.go:282] 2 containers: [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414]
	I0127 02:59:13.244469 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.248667 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.252725 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 02:59:13.252803 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 02:59:13.300743 3796111 cri.go:89] found id: "792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:13.300767 3796111 cri.go:89] found id: "17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:13.300773 3796111 cri.go:89] found id: ""
	I0127 02:59:13.300781 3796111 logs.go:282] 2 containers: [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a]
	I0127 02:59:13.300838 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.305143 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.309056 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 02:59:13.309127 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 02:59:13.361247 3796111 cri.go:89] found id: "ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:13.361270 3796111 cri.go:89] found id: "5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:13.361278 3796111 cri.go:89] found id: ""
	I0127 02:59:13.361285 3796111 logs.go:282] 2 containers: [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21]
	I0127 02:59:13.361343 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.365558 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.369392 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 02:59:13.369489 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 02:59:13.420343 3796111 cri.go:89] found id: "3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:13.420366 3796111 cri.go:89] found id: ""
	I0127 02:59:13.420374 3796111 logs.go:282] 1 containers: [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8]
	I0127 02:59:13.420433 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.424585 3796111 logs.go:123] Gathering logs for coredns [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477] ...
	I0127 02:59:13.424611 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:13.478057 3796111 logs.go:123] Gathering logs for coredns [2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca] ...
	I0127 02:59:13.478086 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:13.536134 3796111 logs.go:123] Gathering logs for kube-proxy [a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2] ...
	I0127 02:59:13.536162 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:13.604664 3796111 logs.go:123] Gathering logs for kubernetes-dashboard [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8] ...
	I0127 02:59:13.604699 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:13.673791 3796111 logs.go:123] Gathering logs for describe nodes ...
	I0127 02:59:13.673820 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 02:59:13.864687 3796111 logs.go:123] Gathering logs for kube-scheduler [d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3] ...
	I0127 02:59:13.864722 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:13.936957 3796111 logs.go:123] Gathering logs for kube-controller-manager [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e] ...
	I0127 02:59:13.936988 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:14.024358 3796111 logs.go:123] Gathering logs for kube-apiserver [ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810] ...
	I0127 02:59:14.024397 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:14.103841 3796111 logs.go:123] Gathering logs for etcd [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b] ...
	I0127 02:59:14.103876 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:14.203352 3796111 logs.go:123] Gathering logs for etcd [8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9] ...
	I0127 02:59:14.203460 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:14.275290 3796111 logs.go:123] Gathering logs for kube-proxy [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754] ...
	I0127 02:59:14.275372 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:14.335204 3796111 logs.go:123] Gathering logs for kube-controller-manager [0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414] ...
	I0127 02:59:14.335232 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:14.451827 3796111 logs.go:123] Gathering logs for kindnet [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1] ...
	I0127 02:59:14.451917 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:14.528838 3796111 logs.go:123] Gathering logs for container status ...
	I0127 02:59:14.528919 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 02:59:14.607271 3796111 logs.go:123] Gathering logs for dmesg ...
	I0127 02:59:14.607428 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 02:59:14.629049 3796111 logs.go:123] Gathering logs for kube-apiserver [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26] ...
	I0127 02:59:14.629129 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:14.710645 3796111 logs.go:123] Gathering logs for kube-scheduler [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8] ...
	I0127 02:59:14.710736 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:14.765480 3796111 logs.go:123] Gathering logs for kindnet [17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a] ...
	I0127 02:59:14.765553 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:14.828838 3796111 logs.go:123] Gathering logs for storage-provisioner [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae] ...
	I0127 02:59:14.828906 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:14.888808 3796111 logs.go:123] Gathering logs for storage-provisioner [5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21] ...
	I0127 02:59:14.888835 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:14.940906 3796111 logs.go:123] Gathering logs for containerd ...
	I0127 02:59:14.940931 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 02:59:15.014873 3796111 logs.go:123] Gathering logs for kubelet ...
	I0127 02:59:15.014965 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 02:59:15.084568 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967126     660 reflector.go:138] object-"default"/"default-token-gqprk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gqprk" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.084862 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967200     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085108 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967258     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghd6s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghd6s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085384 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967336     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-6zk7s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-6zk7s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085625 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967387     660 reflector.go:138] object-"kube-system"/"coredns-token-l287g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l287g" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085873 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967447     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.086176 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967496     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-54qrt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-54qrt" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.095025 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:53 old-k8s-version-949994 kubelet[660]: E0127 02:53:53.876531     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.095319 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:54 old-k8s-version-949994 kubelet[660]: E0127 02:53:54.021725     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.099091 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:09 old-k8s-version-949994 kubelet[660]: E0127 02:54:09.521072     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.101668 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.278637     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.101911 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.504207     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.102301 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:23 old-k8s-version-949994 kubelet[660]: E0127 02:54:23.282663     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.102683 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:24 old-k8s-version-949994 kubelet[660]: E0127 02:54:24.284977     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.103161 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:26 old-k8s-version-949994 kubelet[660]: E0127 02:54:26.292140     660 pod_workers.go:191] Error syncing pod 2b0aa32b-1180-4a97-8374-d786d139dc2c ("storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"
	W0127 02:59:15.105983 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:37 old-k8s-version-949994 kubelet[660]: E0127 02:54:37.511569     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.106615 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:39 old-k8s-version-949994 kubelet[660]: E0127 02:54:39.335961     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.107194 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:42 old-k8s-version-949994 kubelet[660]: E0127 02:54:42.579255     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.107385 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:48 old-k8s-version-949994 kubelet[660]: E0127 02:54:48.501857     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.107709 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:53 old-k8s-version-949994 kubelet[660]: E0127 02:54:53.502843     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.107890 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:00 old-k8s-version-949994 kubelet[660]: E0127 02:55:00.501841     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.108480 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:07 old-k8s-version-949994 kubelet[660]: E0127 02:55:07.426639     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.108808 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:12 old-k8s-version-949994 kubelet[660]: E0127 02:55:12.579010     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.108989 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:13 old-k8s-version-949994 kubelet[660]: E0127 02:55:13.501799     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.111603 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:24 old-k8s-version-949994 kubelet[660]: E0127 02:55:24.528464     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.111990 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:25 old-k8s-version-949994 kubelet[660]: E0127 02:55:25.501540     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.112206 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:36 old-k8s-version-949994 kubelet[660]: E0127 02:55:36.501984     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.112600 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:37 old-k8s-version-949994 kubelet[660]: E0127 02:55:37.501584     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.112861 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:49 old-k8s-version-949994 kubelet[660]: E0127 02:55:49.505318     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.113505 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:51 old-k8s-version-949994 kubelet[660]: E0127 02:55:51.546741     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.113872 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:52 old-k8s-version-949994 kubelet[660]: E0127 02:55:52.578558     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.114103 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:03 old-k8s-version-949994 kubelet[660]: E0127 02:56:03.501771     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.114473 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:06 old-k8s-version-949994 kubelet[660]: E0127 02:56:06.501392     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.114689 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:14 old-k8s-version-949994 kubelet[660]: E0127 02:56:14.502372     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.115045 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:17 old-k8s-version-949994 kubelet[660]: E0127 02:56:17.501933     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.115268 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:26 old-k8s-version-949994 kubelet[660]: E0127 02:56:26.501683     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.115635 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:32 old-k8s-version-949994 kubelet[660]: E0127 02:56:32.501823     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.115859 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:40 old-k8s-version-949994 kubelet[660]: E0127 02:56:40.501634     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.116289 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:45 old-k8s-version-949994 kubelet[660]: E0127 02:56:45.502780     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.118822 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:55 old-k8s-version-949994 kubelet[660]: E0127 02:56:55.514866     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.119188 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:56 old-k8s-version-949994 kubelet[660]: E0127 02:56:56.501563     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.119541 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:07 old-k8s-version-949994 kubelet[660]: E0127 02:57:07.501947     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.119765 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:10 old-k8s-version-949994 kubelet[660]: E0127 02:57:10.502918     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.120388 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:21 old-k8s-version-949994 kubelet[660]: E0127 02:57:21.776454     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.120743 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:22 old-k8s-version-949994 kubelet[660]: E0127 02:57:22.780516     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.120978 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:23 old-k8s-version-949994 kubelet[660]: E0127 02:57:23.505620     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.121189 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:34 old-k8s-version-949994 kubelet[660]: E0127 02:57:34.501538     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.121545 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:35 old-k8s-version-949994 kubelet[660]: E0127 02:57:35.501481     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.121912 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:48 old-k8s-version-949994 kubelet[660]: E0127 02:57:48.503510     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.122150 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:49 old-k8s-version-949994 kubelet[660]: E0127 02:57:49.501969     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.122520 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:59 old-k8s-version-949994 kubelet[660]: E0127 02:57:59.502983     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.122734 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:00 old-k8s-version-949994 kubelet[660]: E0127 02:58:00.501586     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.123100 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:10 old-k8s-version-949994 kubelet[660]: E0127 02:58:10.501269     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.123321 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:15 old-k8s-version-949994 kubelet[660]: E0127 02:58:15.501889     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.123687 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:21 old-k8s-version-949994 kubelet[660]: E0127 02:58:21.501389     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.123897 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:28 old-k8s-version-949994 kubelet[660]: E0127 02:58:28.501733     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.124265 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:34 old-k8s-version-949994 kubelet[660]: E0127 02:58:34.501213     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.124475 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:43 old-k8s-version-949994 kubelet[660]: E0127 02:58:43.501567     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.124832 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.125042 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.125398 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.125620 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.125978 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:15.126008 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:15.126996 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 02:59:15.127126 3796111 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0127 02:59:15.127350 3796111 out.go:270]   Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	  Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.127379 3796111 out.go:270]   Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.127391 3796111 out.go:270]   Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	  Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.127397 3796111 out.go:270]   Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.127405 3796111 out.go:270]   Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	  Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:15.127424 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:15.127438 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:25.128192 3796111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:59:25.142509 3796111 api_server.go:72] duration metric: took 5m53.807437252s to wait for apiserver process to appear ...
	I0127 02:59:25.142534 3796111 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:59:25.142569 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 02:59:25.142630 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 02:59:25.218274 3796111 cri.go:89] found id: "f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:25.218294 3796111 cri.go:89] found id: "ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:25.218299 3796111 cri.go:89] found id: ""
	I0127 02:59:25.218306 3796111 logs.go:282] 2 containers: [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810]
	I0127 02:59:25.218366 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.223228 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.233535 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 02:59:25.233608 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 02:59:25.287247 3796111 cri.go:89] found id: "913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:25.287268 3796111 cri.go:89] found id: "8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:25.287273 3796111 cri.go:89] found id: ""
	I0127 02:59:25.287281 3796111 logs.go:282] 2 containers: [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9]
	I0127 02:59:25.287350 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.291869 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.296041 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 02:59:25.296114 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 02:59:25.353617 3796111 cri.go:89] found id: "80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:25.353636 3796111 cri.go:89] found id: "2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:25.353641 3796111 cri.go:89] found id: ""
	I0127 02:59:25.353648 3796111 logs.go:282] 2 containers: [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca]
	I0127 02:59:25.353712 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.358444 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.362671 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 02:59:25.362745 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 02:59:25.420248 3796111 cri.go:89] found id: "6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:25.420268 3796111 cri.go:89] found id: "d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:25.420273 3796111 cri.go:89] found id: ""
	I0127 02:59:25.420280 3796111 logs.go:282] 2 containers: [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3]
	I0127 02:59:25.420338 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.425743 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.432269 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 02:59:25.432340 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 02:59:25.493625 3796111 cri.go:89] found id: "1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:25.493696 3796111 cri.go:89] found id: "a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:25.493715 3796111 cri.go:89] found id: ""
	I0127 02:59:25.493738 3796111 logs.go:282] 2 containers: [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2]
	I0127 02:59:25.493833 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.499566 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.504443 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 02:59:25.504514 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 02:59:25.580657 3796111 cri.go:89] found id: "60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:25.580678 3796111 cri.go:89] found id: "0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:25.580683 3796111 cri.go:89] found id: ""
	I0127 02:59:25.580690 3796111 logs.go:282] 2 containers: [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414]
	I0127 02:59:25.580745 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.587524 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.592431 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 02:59:25.592584 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 02:59:25.652963 3796111 cri.go:89] found id: "792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:25.653037 3796111 cri.go:89] found id: "17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:25.653056 3796111 cri.go:89] found id: ""
	I0127 02:59:25.653080 3796111 logs.go:282] 2 containers: [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a]
	I0127 02:59:25.653174 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.658424 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.664336 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 02:59:25.664466 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 02:59:25.739132 3796111 cri.go:89] found id: "3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:25.739205 3796111 cri.go:89] found id: ""
	I0127 02:59:25.739229 3796111 logs.go:282] 1 containers: [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8]
	I0127 02:59:25.739320 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.743622 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 02:59:25.743746 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 02:59:25.846595 3796111 cri.go:89] found id: "ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:25.846658 3796111 cri.go:89] found id: "5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:25.846686 3796111 cri.go:89] found id: ""
	I0127 02:59:25.846705 3796111 logs.go:282] 2 containers: [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21]
	I0127 02:59:25.846798 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.851520 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.856633 3796111 logs.go:123] Gathering logs for kindnet [17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a] ...
	I0127 02:59:25.856708 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:25.916964 3796111 logs.go:123] Gathering logs for storage-provisioner [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae] ...
	I0127 02:59:25.917144 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:25.972487 3796111 logs.go:123] Gathering logs for containerd ...
	I0127 02:59:25.972512 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 02:59:26.054510 3796111 logs.go:123] Gathering logs for describe nodes ...
	I0127 02:59:26.054548 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 02:59:26.207339 3796111 logs.go:123] Gathering logs for kube-apiserver [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26] ...
	I0127 02:59:26.207370 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:26.265602 3796111 logs.go:123] Gathering logs for etcd [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b] ...
	I0127 02:59:26.265637 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:26.318707 3796111 logs.go:123] Gathering logs for kube-scheduler [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8] ...
	I0127 02:59:26.318739 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:26.361129 3796111 logs.go:123] Gathering logs for kube-proxy [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754] ...
	I0127 02:59:26.361156 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:26.413909 3796111 logs.go:123] Gathering logs for kube-controller-manager [0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414] ...
	I0127 02:59:26.413937 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:26.493497 3796111 logs.go:123] Gathering logs for kindnet [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1] ...
	I0127 02:59:26.493585 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:26.544762 3796111 logs.go:123] Gathering logs for kubernetes-dashboard [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8] ...
	I0127 02:59:26.544794 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:26.595658 3796111 logs.go:123] Gathering logs for dmesg ...
	I0127 02:59:26.595688 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 02:59:26.613129 3796111 logs.go:123] Gathering logs for kube-apiserver [ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810] ...
	I0127 02:59:26.613204 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:26.669414 3796111 logs.go:123] Gathering logs for coredns [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477] ...
	I0127 02:59:26.669490 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:26.717840 3796111 logs.go:123] Gathering logs for coredns [2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca] ...
	I0127 02:59:26.717869 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:26.757167 3796111 logs.go:123] Gathering logs for kube-proxy [a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2] ...
	I0127 02:59:26.757196 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:26.798051 3796111 logs.go:123] Gathering logs for container status ...
	I0127 02:59:26.798086 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 02:59:26.845207 3796111 logs.go:123] Gathering logs for kubelet ...
	I0127 02:59:26.845236 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 02:59:26.909018 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967126     660 reflector.go:138] object-"default"/"default-token-gqprk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gqprk" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909268 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967200     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909505 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967258     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghd6s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghd6s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909756 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967336     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-6zk7s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-6zk7s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909988 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967387     660 reflector.go:138] object-"kube-system"/"coredns-token-l287g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l287g" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.910228 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967447     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.910466 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967496     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-54qrt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-54qrt" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.918459 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:53 old-k8s-version-949994 kubelet[660]: E0127 02:53:53.876531     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.918698 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:54 old-k8s-version-949994 kubelet[660]: E0127 02:53:54.021725     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.925130 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:09 old-k8s-version-949994 kubelet[660]: E0127 02:54:09.521072     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.927635 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.278637     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.927828 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.504207     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.928157 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:23 old-k8s-version-949994 kubelet[660]: E0127 02:54:23.282663     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.928494 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:24 old-k8s-version-949994 kubelet[660]: E0127 02:54:24.284977     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.929099 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:26 old-k8s-version-949994 kubelet[660]: E0127 02:54:26.292140     660 pod_workers.go:191] Error syncing pod 2b0aa32b-1180-4a97-8374-d786d139dc2c ("storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"
	W0127 02:59:26.932158 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:37 old-k8s-version-949994 kubelet[660]: E0127 02:54:37.511569     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.932846 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:39 old-k8s-version-949994 kubelet[660]: E0127 02:54:39.335961     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.933330 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:42 old-k8s-version-949994 kubelet[660]: E0127 02:54:42.579255     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.933538 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:48 old-k8s-version-949994 kubelet[660]: E0127 02:54:48.501857     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.933886 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:53 old-k8s-version-949994 kubelet[660]: E0127 02:54:53.502843     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.934090 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:00 old-k8s-version-949994 kubelet[660]: E0127 02:55:00.501841     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.934727 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:07 old-k8s-version-949994 kubelet[660]: E0127 02:55:07.426639     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.935133 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:12 old-k8s-version-949994 kubelet[660]: E0127 02:55:12.579010     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.935339 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:13 old-k8s-version-949994 kubelet[660]: E0127 02:55:13.501799     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.938047 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:24 old-k8s-version-949994 kubelet[660]: E0127 02:55:24.528464     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.938415 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:25 old-k8s-version-949994 kubelet[660]: E0127 02:55:25.501540     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.938630 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:36 old-k8s-version-949994 kubelet[660]: E0127 02:55:36.501984     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.938994 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:37 old-k8s-version-949994 kubelet[660]: E0127 02:55:37.501584     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.939204 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:49 old-k8s-version-949994 kubelet[660]: E0127 02:55:49.505318     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.939825 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:51 old-k8s-version-949994 kubelet[660]: E0127 02:55:51.546741     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.940171 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:52 old-k8s-version-949994 kubelet[660]: E0127 02:55:52.578558     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.940412 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:03 old-k8s-version-949994 kubelet[660]: E0127 02:56:03.501771     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.940764 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:06 old-k8s-version-949994 kubelet[660]: E0127 02:56:06.501392     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.940981 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:14 old-k8s-version-949994 kubelet[660]: E0127 02:56:14.502372     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.941335 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:17 old-k8s-version-949994 kubelet[660]: E0127 02:56:17.501933     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.941568 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:26 old-k8s-version-949994 kubelet[660]: E0127 02:56:26.501683     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.941996 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:32 old-k8s-version-949994 kubelet[660]: E0127 02:56:32.501823     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.942196 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:40 old-k8s-version-949994 kubelet[660]: E0127 02:56:40.501634     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.942555 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:45 old-k8s-version-949994 kubelet[660]: E0127 02:56:45.502780     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.945106 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:55 old-k8s-version-949994 kubelet[660]: E0127 02:56:55.514866     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.945463 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:56 old-k8s-version-949994 kubelet[660]: E0127 02:56:56.501563     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.945853 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:07 old-k8s-version-949994 kubelet[660]: E0127 02:57:07.501947     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.946094 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:10 old-k8s-version-949994 kubelet[660]: E0127 02:57:10.502918     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.946801 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:21 old-k8s-version-949994 kubelet[660]: E0127 02:57:21.776454     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.947158 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:22 old-k8s-version-949994 kubelet[660]: E0127 02:57:22.780516     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.947364 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:23 old-k8s-version-949994 kubelet[660]: E0127 02:57:23.505620     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.947572 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:34 old-k8s-version-949994 kubelet[660]: E0127 02:57:34.501538     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.947920 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:35 old-k8s-version-949994 kubelet[660]: E0127 02:57:35.501481     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.948269 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:48 old-k8s-version-949994 kubelet[660]: E0127 02:57:48.503510     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.948518 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:49 old-k8s-version-949994 kubelet[660]: E0127 02:57:49.501969     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.949022 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:59 old-k8s-version-949994 kubelet[660]: E0127 02:57:59.502983     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.949233 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:00 old-k8s-version-949994 kubelet[660]: E0127 02:58:00.501586     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.949595 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:10 old-k8s-version-949994 kubelet[660]: E0127 02:58:10.501269     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.949802 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:15 old-k8s-version-949994 kubelet[660]: E0127 02:58:15.501889     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.950158 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:21 old-k8s-version-949994 kubelet[660]: E0127 02:58:21.501389     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.950395 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:28 old-k8s-version-949994 kubelet[660]: E0127 02:58:28.501733     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.950751 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:34 old-k8s-version-949994 kubelet[660]: E0127 02:58:34.501213     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.950961 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:43 old-k8s-version-949994 kubelet[660]: E0127 02:58:43.501567     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.951309 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.951528 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.951914 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.952124 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.952532 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.953020 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:18 old-k8s-version-949994 kubelet[660]: E0127 02:59:18.502650     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.953379 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:26 old-k8s-version-949994 kubelet[660]: E0127 02:59:26.501368     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:26.953395 3796111 logs.go:123] Gathering logs for etcd [8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9] ...
	I0127 02:59:26.953420 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:27.014044 3796111 logs.go:123] Gathering logs for kube-scheduler [d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3] ...
	I0127 02:59:27.014093 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:27.059697 3796111 logs.go:123] Gathering logs for kube-controller-manager [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e] ...
	I0127 02:59:27.059730 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:27.129429 3796111 logs.go:123] Gathering logs for storage-provisioner [5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21] ...
	I0127 02:59:27.129470 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:27.169921 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:27.169952 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 02:59:27.170000 3796111 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0127 02:59:27.170013 3796111 out.go:270]   Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	  Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:27.170021 3796111 out.go:270]   Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:27.170035 3796111 out.go:270]   Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	  Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:27.170041 3796111 out.go:270]   Jan 27 02:59:18 old-k8s-version-949994 kubelet[660]: E0127 02:59:18.502650     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 02:59:18 old-k8s-version-949994 kubelet[660]: E0127 02:59:18.502650     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:27.170051 3796111 out.go:270]   Jan 27 02:59:26 old-k8s-version-949994 kubelet[660]: E0127 02:59:26.501368     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	  Jan 27 02:59:26 old-k8s-version-949994 kubelet[660]: E0127 02:59:26.501368     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:27.170057 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:27.170066 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:37.172121 3796111 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 02:59:37.181232 3796111 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 02:59:37.184443 3796111 out.go:201] 
	W0127 02:59:37.187360 3796111 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0127 02:59:37.187406 3796111 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0127 02:59:37.187429 3796111 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0127 02:59:37.187435 3796111 out.go:270] * 
	* 
	W0127 02:59:37.188343 3796111 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 02:59:37.192140 3796111 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-949994 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
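A minimal reproduction/remediation sketch, assuming a checkout with the test binary already built at out/minikube-linux-arm64; the cleanup step and the start flags are taken verbatim from the suggestion and the failing invocation above, and the final logs command follows the report's own advice to attach logs.txt to an upstream issue:

	# cleanup suggested by the stderr above, then retry the same start that failed
	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-949994 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
	# if the control plane still never reaches v1.20.0, capture logs for the GitHub issue
	out/minikube-linux-arm64 -p old-k8s-version-949994 logs --file=logs.txt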
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-949994
helpers_test.go:235: (dbg) docker inspect old-k8s-version-949994:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "54e1e12125451246b331facfd52770079e3feae8cf83690d92a1247929e10347",
	        "Created": "2025-01-27T02:50:10.877395828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3796311,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T02:53:23.700578891Z",
	            "FinishedAt": "2025-01-27T02:53:22.786534989Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/54e1e12125451246b331facfd52770079e3feae8cf83690d92a1247929e10347/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54e1e12125451246b331facfd52770079e3feae8cf83690d92a1247929e10347/hostname",
	        "HostsPath": "/var/lib/docker/containers/54e1e12125451246b331facfd52770079e3feae8cf83690d92a1247929e10347/hosts",
	        "LogPath": "/var/lib/docker/containers/54e1e12125451246b331facfd52770079e3feae8cf83690d92a1247929e10347/54e1e12125451246b331facfd52770079e3feae8cf83690d92a1247929e10347-json.log",
	        "Name": "/old-k8s-version-949994",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-949994:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-949994",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/74ea1f8c26d587233924644a4ddd909a62628df16b47c7de36614594b527a0d8-init/diff:/var/lib/docker/overlay2/5296668a0a30b38feb9159e191c47d5587ed9f36bb9a48e894c12f88095e8aab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/74ea1f8c26d587233924644a4ddd909a62628df16b47c7de36614594b527a0d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/74ea1f8c26d587233924644a4ddd909a62628df16b47c7de36614594b527a0d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/74ea1f8c26d587233924644a4ddd909a62628df16b47c7de36614594b527a0d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-949994",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-949994/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-949994",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-949994",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-949994",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "88a2f5ccb4f555a9bf7faed4d9bec52173596e2820bd4350a46e9ef61a352e51",
	            "SandboxKey": "/var/run/docker/netns/88a2f5ccb4f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37785"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37783"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-949994": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "782c8980703d704b27a09304a8a1fc23fa40e46d0d2e7c713bd35610e7868c27",
	                    "EndpointID": "10e1adc144cd2ebcf168d0e27ea747d8bd31fd3c08d1b3feaa0edc1d884fbcfc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-949994",
	                        "54e1e1212545"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-949994 -n old-k8s-version-949994
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-949994 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-949994 logs -n 25: (2.079152038s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-458112                           | force-systemd-flag-458112 | jenkins | v1.35.0 | 27 Jan 25 02:48 UTC | 27 Jan 25 02:49 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-458112                              | force-systemd-flag-458112 | jenkins | v1.35.0 | 27 Jan 25 02:49 UTC | 27 Jan 25 02:49 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-458112                           | force-systemd-flag-458112 | jenkins | v1.35.0 | 27 Jan 25 02:49 UTC | 27 Jan 25 02:49 UTC |
	| start   | -p cert-options-703948                                 | cert-options-703948       | jenkins | v1.35.0 | 27 Jan 25 02:49 UTC | 27 Jan 25 02:49 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-703948 ssh                                | cert-options-703948       | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:50 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-703948 -- sudo                         | cert-options-703948       | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:50 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-703948                                 | cert-options-703948       | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:50 UTC |
	| start   | -p old-k8s-version-949994                              | old-k8s-version-949994    | jenkins | v1.35.0 | 27 Jan 25 02:50 UTC | 27 Jan 25 02:53 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-393434                              | cert-expiration-393434    | jenkins | v1.35.0 | 27 Jan 25 02:51 UTC | 27 Jan 25 02:51 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-393434                              | cert-expiration-393434    | jenkins | v1.35.0 | 27 Jan 25 02:51 UTC | 27 Jan 25 02:51 UTC |
	| start   | -p no-preload-715478                                   | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:51 UTC | 27 Jan 25 02:53 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-949994        | old-k8s-version-949994    | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-949994                              | old-k8s-version-949994    | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-715478             | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-715478                                   | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-949994             | old-k8s-version-949994    | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-949994                              | old-k8s-version-949994    | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-715478                  | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-715478                                   | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:53 UTC | 27 Jan 25 02:58 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	| image   | no-preload-715478 image list                           | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:58 UTC | 27 Jan 25 02:58 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-715478                                   | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:58 UTC | 27 Jan 25 02:58 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p no-preload-715478                                   | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:58 UTC | 27 Jan 25 02:58 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p no-preload-715478                                   | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:58 UTC | 27 Jan 25 02:58 UTC |
	| delete  | -p no-preload-715478                                   | no-preload-715478         | jenkins | v1.35.0 | 27 Jan 25 02:58 UTC | 27 Jan 25 02:58 UTC |
	| start   | -p embed-certs-579827                                  | embed-certs-579827        | jenkins | v1.35.0 | 27 Jan 25 02:58 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 02:58:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:58:49.402146 3805055 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:58:49.403069 3805055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:58:49.403107 3805055 out.go:358] Setting ErrFile to fd 2...
	I0127 02:58:49.403127 3805055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:58:49.403410 3805055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:58:49.403871 3805055 out.go:352] Setting JSON to false
	I0127 02:58:49.404967 3805055 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":92473,"bootTime":1737854256,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:58:49.405083 3805055 start.go:139] virtualization:  
	I0127 02:58:49.409122 3805055 out.go:177] * [embed-certs-579827] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 02:58:49.413412 3805055 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:58:49.413569 3805055 notify.go:220] Checking for updates...
	I0127 02:58:49.419653 3805055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:58:49.422795 3805055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:58:49.425913 3805055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:58:49.428976 3805055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 02:58:49.432023 3805055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:58:49.435747 3805055 config.go:182] Loaded profile config "old-k8s-version-949994": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 02:58:49.435901 3805055 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:58:49.462644 3805055 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:58:49.462797 3805055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:58:49.537640 3805055 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 02:58:49.528121369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:58:49.537755 3805055 docker.go:318] overlay module found
	I0127 02:58:49.540968 3805055 out.go:177] * Using the docker driver based on user configuration
	I0127 02:58:49.543886 3805055 start.go:297] selected driver: docker
	I0127 02:58:49.543909 3805055 start.go:901] validating driver "docker" against <nil>
	I0127 02:58:49.543924 3805055 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:58:49.544696 3805055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:58:49.602882 3805055 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 02:58:49.593180143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:58:49.603083 3805055 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 02:58:49.603367 3805055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 02:58:49.606301 3805055 out.go:177] * Using Docker driver with root privileges
	I0127 02:58:49.609175 3805055 cni.go:84] Creating CNI manager for ""
	I0127 02:58:49.609237 3805055 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:58:49.609251 3805055 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 02:58:49.609327 3805055 start.go:340] cluster config:
	{Name:embed-certs-579827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-579827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:58:49.612479 3805055 out.go:177] * Starting "embed-certs-579827" primary control-plane node in "embed-certs-579827" cluster
	I0127 02:58:49.615367 3805055 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 02:58:49.618343 3805055 out.go:177] * Pulling base image v0.0.46 ...
	I0127 02:58:49.621156 3805055 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:58:49.621227 3805055 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 02:58:49.621246 3805055 cache.go:56] Caching tarball of preloaded images
	I0127 02:58:49.621248 3805055 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 02:58:49.621331 3805055 preload.go:172] Found /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0127 02:58:49.621342 3805055 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 02:58:49.621456 3805055 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/config.json ...
	I0127 02:58:49.621478 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/config.json: {Name:mkb6a7540171776d100e263c2d77dd2c905babfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:58:49.641524 3805055 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 02:58:49.641548 3805055 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 02:58:49.641566 3805055 cache.go:230] Successfully downloaded all kic artifacts
	I0127 02:58:49.641596 3805055 start.go:360] acquireMachinesLock for embed-certs-579827: {Name:mkea4cc5245d0e9a87d703d4a70fce362b1dba11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 02:58:49.641704 3805055 start.go:364] duration metric: took 86.357µs to acquireMachinesLock for "embed-certs-579827"
	I0127 02:58:49.641735 3805055 start.go:93] Provisioning new machine with config: &{Name:embed-certs-579827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-579827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 02:58:49.641827 3805055 start.go:125] createHost starting for "" (driver="docker")
	I0127 02:58:48.852359 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:51.348682 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:49.645462 3805055 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0127 02:58:49.645722 3805055 start.go:159] libmachine.API.Create for "embed-certs-579827" (driver="docker")
	I0127 02:58:49.645753 3805055 client.go:168] LocalClient.Create starting
	I0127 02:58:49.645830 3805055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem
	I0127 02:58:49.645872 3805055 main.go:141] libmachine: Decoding PEM data...
	I0127 02:58:49.645891 3805055 main.go:141] libmachine: Parsing certificate...
	I0127 02:58:49.645955 3805055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem
	I0127 02:58:49.645985 3805055 main.go:141] libmachine: Decoding PEM data...
	I0127 02:58:49.645998 3805055 main.go:141] libmachine: Parsing certificate...
	I0127 02:58:49.646402 3805055 cli_runner.go:164] Run: docker network inspect embed-certs-579827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 02:58:49.662863 3805055 cli_runner.go:211] docker network inspect embed-certs-579827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 02:58:49.663091 3805055 network_create.go:284] running [docker network inspect embed-certs-579827] to gather additional debugging logs...
	I0127 02:58:49.663118 3805055 cli_runner.go:164] Run: docker network inspect embed-certs-579827
	W0127 02:58:49.678969 3805055 cli_runner.go:211] docker network inspect embed-certs-579827 returned with exit code 1
	I0127 02:58:49.679002 3805055 network_create.go:287] error running [docker network inspect embed-certs-579827]: docker network inspect embed-certs-579827: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-579827 not found
	I0127 02:58:49.679015 3805055 network_create.go:289] output of [docker network inspect embed-certs-579827]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-579827 not found
	
	** /stderr **
	I0127 02:58:49.679117 3805055 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 02:58:49.697421 3805055 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20c6b9faf740 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a5:84:e8:b3} reservation:<nil>}
	I0127 02:58:49.697860 3805055 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ed55a6afcd29 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ae:45:09:f0} reservation:<nil>}
	I0127 02:58:49.698415 3805055 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6d1bfb053f15 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:0f:00:a9:30} reservation:<nil>}
	I0127 02:58:49.698769 3805055 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-782c8980703d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:90:1f:74:aa} reservation:<nil>}
	I0127 02:58:49.699327 3805055 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f6420}
	I0127 02:58:49.699353 3805055 network_create.go:124] attempt to create docker network embed-certs-579827 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0127 02:58:49.699414 3805055 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-579827 embed-certs-579827
	I0127 02:58:49.775654 3805055 network_create.go:108] docker network embed-certs-579827 192.168.85.0/24 created
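[editorial sketch] The lines above show how the cluster network subnet is chosen: existing minikube bridges (192.168.49.0/24 through 192.168.76.0/24) are skipped and the first free /24, 192.168.85.0/24, is passed to `docker network create`. The following is a minimal, hypothetical Go sketch of that scan, not minikube's actual implementation; the starting octet and the step of 9 are read off the log above, and the `taken` map stands in for a real query of existing Docker networks.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Subnets already claimed by existing bridges (per the log: 49, 58, 67, 76 were taken).
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    	}
    	// Scan third octets 49, 58, 67, ... (step 9, as seen above) for a free /24.
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		// Same docker invocation style as the log: a labeled bridge network
    		// with an explicit subnet and gateway.
    		cmd := exec.Command("docker", "network", "create",
    			"--driver=bridge",
    			"--subnet="+subnet, "--gateway="+gateway,
    			"-o", "com.docker.network.driver.mtu=1500",
    			"--label=created_by.minikube.sigs.k8s.io=true",
    			"--label=name.minikube.sigs.k8s.io=embed-certs-579827",
    			"embed-certs-579827")
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Println("network create failed:", err, string(out))
    			return
    		}
    		fmt.Println("created", subnet)
    		return
    	}
    }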
	I0127 02:58:49.775686 3805055 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-579827" container
	I0127 02:58:49.775798 3805055 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 02:58:49.792993 3805055 cli_runner.go:164] Run: docker volume create embed-certs-579827 --label name.minikube.sigs.k8s.io=embed-certs-579827 --label created_by.minikube.sigs.k8s.io=true
	I0127 02:58:49.812010 3805055 oci.go:103] Successfully created a docker volume embed-certs-579827
	I0127 02:58:49.812097 3805055 cli_runner.go:164] Run: docker run --rm --name embed-certs-579827-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-579827 --entrypoint /usr/bin/test -v embed-certs-579827:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 02:58:50.499717 3805055 oci.go:107] Successfully prepared a docker volume embed-certs-579827
	I0127 02:58:50.499776 3805055 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:58:50.499795 3805055 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 02:58:50.499878 3805055 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-579827:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 02:58:53.353671 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:55.850273 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:58:55.525092 3805055 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-579827:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (5.025154755s)
	I0127 02:58:55.525123 3805055 kic.go:203] duration metric: took 5.02532454s to extract preloaded images to volume ...
	W0127 02:58:55.525268 3805055 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 02:58:55.525388 3805055 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 02:58:55.582219 3805055 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-579827 --name embed-certs-579827 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-579827 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-579827 --network embed-certs-579827 --ip 192.168.85.2 --volume embed-certs-579827:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 02:58:55.930220 3805055 cli_runner.go:164] Run: docker container inspect embed-certs-579827 --format={{.State.Running}}
	I0127 02:58:55.952515 3805055 cli_runner.go:164] Run: docker container inspect embed-certs-579827 --format={{.State.Status}}
	I0127 02:58:55.977592 3805055 cli_runner.go:164] Run: docker exec embed-certs-579827 stat /var/lib/dpkg/alternatives/iptables
	I0127 02:58:56.031629 3805055 oci.go:144] the created container "embed-certs-579827" has a running status.
	I0127 02:58:56.031657 3805055 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa...
	I0127 02:58:56.268746 3805055 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 02:58:56.297078 3805055 cli_runner.go:164] Run: docker container inspect embed-certs-579827 --format={{.State.Status}}
	I0127 02:58:56.326079 3805055 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 02:58:56.326973 3805055 kic_runner.go:114] Args: [docker exec --privileged embed-certs-579827 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 02:58:56.390682 3805055 cli_runner.go:164] Run: docker container inspect embed-certs-579827 --format={{.State.Status}}
	I0127 02:58:56.417904 3805055 machine.go:93] provisionDockerMachine start ...
	I0127 02:58:56.418069 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:58:56.456396 3805055 main.go:141] libmachine: Using SSH client type: native
	I0127 02:58:56.456709 3805055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37791 <nil> <nil>}
	I0127 02:58:56.456719 3805055 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 02:58:56.460932 3805055 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0127 02:58:59.593636 3805055 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-579827
	
	I0127 02:58:59.593662 3805055 ubuntu.go:169] provisioning hostname "embed-certs-579827"
	I0127 02:58:59.593746 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:58:59.612166 3805055 main.go:141] libmachine: Using SSH client type: native
	I0127 02:58:59.612419 3805055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37791 <nil> <nil>}
	I0127 02:58:59.612436 3805055 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-579827 && echo "embed-certs-579827" | sudo tee /etc/hostname
	I0127 02:58:59.747049 3805055 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-579827
	
	I0127 02:58:59.747201 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:58:59.764780 3805055 main.go:141] libmachine: Using SSH client type: native
	I0127 02:58:59.765031 3805055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 37791 <nil> <nil>}
	I0127 02:58:59.765048 3805055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-579827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-579827/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-579827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 02:58:59.886263 3805055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 02:58:59.886291 3805055 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20316-3581420/.minikube CaCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20316-3581420/.minikube}
	I0127 02:58:59.886330 3805055 ubuntu.go:177] setting up certificates
	I0127 02:58:59.886340 3805055 provision.go:84] configureAuth start
	I0127 02:58:59.886413 3805055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-579827
	I0127 02:58:59.909871 3805055 provision.go:143] copyHostCerts
	I0127 02:58:59.909950 3805055 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem, removing ...
	I0127 02:58:59.909964 3805055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem
	I0127 02:58:59.910046 3805055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.pem (1078 bytes)
	I0127 02:58:59.910187 3805055 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem, removing ...
	I0127 02:58:59.910201 3805055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem
	I0127 02:58:59.910231 3805055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/cert.pem (1123 bytes)
	I0127 02:58:59.910292 3805055 exec_runner.go:144] found /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem, removing ...
	I0127 02:58:59.910300 3805055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem
	I0127 02:58:59.910325 3805055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20316-3581420/.minikube/key.pem (1679 bytes)
	I0127 02:58:59.910379 3805055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem org=jenkins.embed-certs-579827 san=[127.0.0.1 192.168.85.2 embed-certs-579827 localhost minikube]
	I0127 02:59:00.273194 3805055 provision.go:177] copyRemoteCerts
	I0127 02:59:00.273298 3805055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 02:59:00.273364 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:59:00.294209 3805055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37791 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa Username:docker}
	I0127 02:59:00.390566 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 02:59:00.419486 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 02:59:00.446152 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 02:59:00.473260 3805055 provision.go:87] duration metric: took 586.904207ms to configureAuth
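[editorial sketch] configureAuth above issues a Docker machine server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.85.2 embed-certs-579827 localhost minikube] and copies it to /etc/docker inside the container. Below is a self-contained Go sketch of issuing such a SAN certificate with the standard library; it creates a throwaway CA inline (the real flow reuses ca.pem/ca-key.pem from the .minikube/certs directory), elides error handling for brevity, and is illustrative only.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA so the sketch is self-contained (errors elided for brevity).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the same SANs the log reports.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-579827"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:     []string{"embed-certs-579827", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

    	// The real flow writes server.pem/server-key.pem and scp's them to /etc/docker.
    	out, _ := os.Create("server.pem")
    	defer out.Close()
    	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }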
	I0127 02:59:00.473289 3805055 ubuntu.go:193] setting minikube options for container-runtime
	I0127 02:59:00.473478 3805055 config.go:182] Loaded profile config "embed-certs-579827": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:59:00.473494 3805055 machine.go:96] duration metric: took 4.055572379s to provisionDockerMachine
	I0127 02:59:00.473501 3805055 client.go:171] duration metric: took 10.827737248s to LocalClient.Create
	I0127 02:59:00.473515 3805055 start.go:167] duration metric: took 10.827794642s to libmachine.API.Create "embed-certs-579827"
	I0127 02:59:00.473523 3805055 start.go:293] postStartSetup for "embed-certs-579827" (driver="docker")
	I0127 02:59:00.473532 3805055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 02:59:00.473589 3805055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 02:59:00.473633 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:59:00.511827 3805055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37791 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa Username:docker}
	I0127 02:59:00.604896 3805055 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 02:59:00.608455 3805055 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 02:59:00.608494 3805055 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 02:59:00.608514 3805055 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 02:59:00.608522 3805055 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 02:59:00.608533 3805055 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-3581420/.minikube/addons for local assets ...
	I0127 02:59:00.608597 3805055 filesync.go:126] Scanning /home/jenkins/minikube-integration/20316-3581420/.minikube/files for local assets ...
	I0127 02:59:00.608678 3805055 filesync.go:149] local asset: /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem -> 35868002.pem in /etc/ssl/certs
	I0127 02:59:00.608791 3805055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 02:59:00.617888 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem --> /etc/ssl/certs/35868002.pem (1708 bytes)
	I0127 02:59:00.643868 3805055 start.go:296] duration metric: took 170.330321ms for postStartSetup
	I0127 02:59:00.644265 3805055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-579827
	I0127 02:59:00.661187 3805055 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/config.json ...
	I0127 02:59:00.661470 3805055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:59:00.661512 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:59:00.678615 3805055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37791 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa Username:docker}
	I0127 02:59:00.767241 3805055 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 02:59:00.771956 3805055 start.go:128] duration metric: took 11.130113931s to createHost
	I0127 02:59:00.771983 3805055 start.go:83] releasing machines lock for "embed-certs-579827", held for 11.130264821s
	I0127 02:59:00.772068 3805055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-579827
	I0127 02:59:00.789166 3805055 ssh_runner.go:195] Run: cat /version.json
	I0127 02:59:00.789223 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:59:00.789287 3805055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 02:59:00.789368 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:59:00.814142 3805055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37791 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa Username:docker}
	I0127 02:59:00.816047 3805055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37791 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa Username:docker}
	I0127 02:59:01.042940 3805055 ssh_runner.go:195] Run: systemctl --version
	I0127 02:59:01.047440 3805055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 02:59:01.052032 3805055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 02:59:01.081137 3805055 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 02:59:01.081275 3805055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 02:59:01.128589 3805055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 02:59:01.128659 3805055 start.go:495] detecting cgroup driver to use...
	I0127 02:59:01.128709 3805055 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 02:59:01.128790 3805055 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 02:59:01.143321 3805055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 02:59:01.160815 3805055 docker.go:217] disabling cri-docker service (if available) ...
	I0127 02:59:01.160959 3805055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 02:59:01.176836 3805055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 02:59:01.192847 3805055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 02:59:01.283281 3805055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 02:59:01.395721 3805055 docker.go:233] disabling docker service ...
	I0127 02:59:01.395836 3805055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 02:59:01.422213 3805055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 02:59:01.436851 3805055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 02:59:01.535609 3805055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 02:59:01.641294 3805055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 02:59:01.653970 3805055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 02:59:01.673075 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 02:59:01.684308 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 02:59:01.696310 3805055 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 02:59:01.696409 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 02:59:01.708347 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:59:01.720044 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 02:59:01.730981 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 02:59:01.742186 3805055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 02:59:01.768099 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 02:59:01.779951 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 02:59:01.791710 3805055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 02:59:01.804689 3805055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 02:59:01.816592 3805055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 02:59:01.828100 3805055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:59:01.916457 3805055 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 02:59:02.108620 3805055 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 02:59:02.108713 3805055 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 02:59:02.112833 3805055 start.go:563] Will wait 60s for crictl version
	I0127 02:59:02.112917 3805055 ssh_runner.go:195] Run: which crictl
	I0127 02:59:02.116710 3805055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 02:59:02.159616 3805055 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
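[editorial sketch] In the steps above the containerd config is rewritten for the cgroupfs driver (SystemdCgroup = false, runc v2, CNI conf_dir), the service is restarted, and minikube waits up to 60s for /run/containerd/containerd.sock before probing crictl. Below is a minimal Go sketch of that socket wait, assuming the socket path shown in the log; the poll interval is illustrative, not minikube's exact value.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a socket path, mirroring the
    // "Will wait 60s for socket path /run/containerd/containerd.sock" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is ready")
    }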
	I0127 02:59:02.159704 3805055 ssh_runner.go:195] Run: containerd --version
	I0127 02:59:02.187826 3805055 ssh_runner.go:195] Run: containerd --version
	I0127 02:59:02.217457 3805055 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
	I0127 02:58:58.347216 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:00.348519 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:02.850554 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:02.220565 3805055 cli_runner.go:164] Run: docker network inspect embed-certs-579827 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 02:59:02.239384 3805055 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0127 02:59:02.249049 3805055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:59:02.267091 3805055 kubeadm.go:883] updating cluster {Name:embed-certs-579827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-579827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 02:59:02.267220 3805055 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:59:02.267289 3805055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:59:02.309732 3805055 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:59:02.309756 3805055 containerd.go:534] Images already preloaded, skipping extraction
	I0127 02:59:02.309824 3805055 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 02:59:02.353113 3805055 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 02:59:02.353137 3805055 cache_images.go:84] Images are preloaded, skipping loading
	I0127 02:59:02.353146 3805055 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.1 containerd true true} ...
	I0127 02:59:02.353246 3805055 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-579827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-579827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 02:59:02.353314 3805055 ssh_runner.go:195] Run: sudo crictl info
	I0127 02:59:02.389994 3805055 cni.go:84] Creating CNI manager for ""
	I0127 02:59:02.390019 3805055 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:59:02.390029 3805055 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 02:59:02.390054 3805055 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-579827 NodeName:embed-certs-579827 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 02:59:02.390236 3805055 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-579827"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 02:59:02.390313 3805055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 02:59:02.399436 3805055 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 02:59:02.399506 3805055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 02:59:02.407941 3805055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0127 02:59:02.425501 3805055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 02:59:02.445299 3805055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0127 02:59:02.465491 3805055 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0127 02:59:02.469104 3805055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 02:59:02.480061 3805055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:59:02.568853 3805055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:59:02.587507 3805055 certs.go:68] Setting up /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827 for IP: 192.168.85.2
	I0127 02:59:02.587529 3805055 certs.go:194] generating shared ca certs ...
	I0127 02:59:02.587547 3805055 certs.go:226] acquiring lock for ca certs: {Name:mk1bae14ef6af74439063c8478bc03213541b880 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:02.587706 3805055 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.key
	I0127 02:59:02.587763 3805055 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.key
	I0127 02:59:02.587777 3805055 certs.go:256] generating profile certs ...
	I0127 02:59:02.587832 3805055 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/client.key
	I0127 02:59:02.587863 3805055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/client.crt with IP's: []
	I0127 02:59:03.004859 3805055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/client.crt ...
	I0127 02:59:03.004898 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/client.crt: {Name:mk2349337b04cc796d0ccd01c7e7f52567bf5ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:03.005853 3805055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/client.key ...
	I0127 02:59:03.005889 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/client.key: {Name:mkd36f59bfa0511f2108228285c84f37d7291395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:03.006048 3805055 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.key.5fdd9838
	I0127 02:59:03.006073 3805055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.crt.5fdd9838 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0127 02:59:03.754136 3805055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.crt.5fdd9838 ...
	I0127 02:59:03.754170 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.crt.5fdd9838: {Name:mkcf3478eb40e9e410dc985be0d692761f1dde13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:03.754969 3805055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.key.5fdd9838 ...
	I0127 02:59:03.754998 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.key.5fdd9838: {Name:mka2ddcf7c67e115239a50ce4f6f709436c73708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:03.755108 3805055 certs.go:381] copying /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.crt.5fdd9838 -> /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.crt
	I0127 02:59:03.755189 3805055 certs.go:385] copying /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.key.5fdd9838 -> /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.key
	I0127 02:59:03.755254 3805055 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.key
	I0127 02:59:03.755272 3805055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.crt with IP's: []
	I0127 02:59:05.174318 3805055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.crt ...
	I0127 02:59:05.174355 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.crt: {Name:mk58fb4d2c3b6b4c53d68994e01a7884d31fdf1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:05.174565 3805055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.key ...
	I0127 02:59:05.174581 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.key: {Name:mk7d9926f4f7d9db2ec4914a5b93168f3f167834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:05.175453 3805055 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800.pem (1338 bytes)
	W0127 02:59:05.175504 3805055 certs.go:480] ignoring /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800_empty.pem, impossibly tiny 0 bytes
	I0127 02:59:05.175514 3805055 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca-key.pem (1675 bytes)
	I0127 02:59:05.175539 3805055 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/ca.pem (1078 bytes)
	I0127 02:59:05.175569 3805055 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/cert.pem (1123 bytes)
	I0127 02:59:05.175597 3805055 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/key.pem (1679 bytes)
	I0127 02:59:05.175643 3805055 certs.go:484] found cert: /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem (1708 bytes)
	I0127 02:59:05.176303 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 02:59:05.204808 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 02:59:05.229790 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 02:59:05.255953 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 02:59:05.283630 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 02:59:05.308931 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 02:59:05.333792 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 02:59:05.362179 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/embed-certs-579827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 02:59:05.386665 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 02:59:05.416572 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/certs/3586800.pem --> /usr/share/ca-certificates/3586800.pem (1338 bytes)
	I0127 02:59:05.443812 3805055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/ssl/certs/35868002.pem --> /usr/share/ca-certificates/35868002.pem (1708 bytes)
	I0127 02:59:05.474399 3805055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 02:59:05.493725 3805055 ssh_runner.go:195] Run: openssl version
	I0127 02:59:05.499674 3805055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 02:59:05.514527 3805055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:59:05.518159 3805055 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 02:08 /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:59:05.518272 3805055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 02:59:05.525200 3805055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 02:59:05.535184 3805055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3586800.pem && ln -fs /usr/share/ca-certificates/3586800.pem /etc/ssl/certs/3586800.pem"
	I0127 02:59:05.545543 3805055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3586800.pem
	I0127 02:59:05.549494 3805055 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 02:16 /usr/share/ca-certificates/3586800.pem
	I0127 02:59:05.549624 3805055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3586800.pem
	I0127 02:59:05.556874 3805055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3586800.pem /etc/ssl/certs/51391683.0"
	I0127 02:59:05.568939 3805055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35868002.pem && ln -fs /usr/share/ca-certificates/35868002.pem /etc/ssl/certs/35868002.pem"
	I0127 02:59:05.579616 3805055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35868002.pem
	I0127 02:59:05.584010 3805055 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 02:16 /usr/share/ca-certificates/35868002.pem
	I0127 02:59:05.584088 3805055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35868002.pem
	I0127 02:59:05.593570 3805055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35868002.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 02:59:05.605837 3805055 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 02:59:05.610198 3805055 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 02:59:05.610258 3805055 kubeadm.go:392] StartCluster: {Name:embed-certs-579827 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-579827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:59:05.610349 3805055 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 02:59:05.610424 3805055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 02:59:05.662338 3805055 cri.go:89] found id: ""
	I0127 02:59:05.662447 3805055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 02:59:05.672025 3805055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 02:59:05.681063 3805055 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 02:59:05.681171 3805055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 02:59:05.690176 3805055 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 02:59:05.690239 3805055 kubeadm.go:157] found existing configuration files:
	
	I0127 02:59:05.690320 3805055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 02:59:05.699478 3805055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 02:59:05.699591 3805055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 02:59:05.708529 3805055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 02:59:05.717440 3805055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 02:59:05.717507 3805055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 02:59:05.726156 3805055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 02:59:05.735798 3805055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 02:59:05.735873 3805055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 02:59:05.744500 3805055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 02:59:05.753702 3805055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 02:59:05.753767 3805055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 02:59:05.762586 3805055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 02:59:05.810861 3805055 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 02:59:05.810954 3805055 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 02:59:05.832396 3805055 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 02:59:05.832477 3805055 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 02:59:05.832530 3805055 kubeadm.go:310] OS: Linux
	I0127 02:59:05.832582 3805055 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 02:59:05.832634 3805055 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 02:59:05.832689 3805055 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 02:59:05.832741 3805055 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 02:59:05.832792 3805055 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 02:59:05.832845 3805055 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 02:59:05.832894 3805055 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 02:59:05.832945 3805055 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 02:59:05.832994 3805055 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 02:59:05.903125 3805055 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 02:59:05.903308 3805055 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 02:59:05.903441 3805055 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 02:59:05.914782 3805055 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 02:59:04.869664 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:07.350731 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:05.918233 3805055 out.go:235]   - Generating certificates and keys ...
	I0127 02:59:05.918462 3805055 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 02:59:05.918570 3805055 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 02:59:06.188915 3805055 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 02:59:07.222392 3805055 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 02:59:08.279331 3805055 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 02:59:08.578156 3805055 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 02:59:09.007871 3805055 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 02:59:09.008250 3805055 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-579827 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0127 02:59:09.849140 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:12.351772 3796111 pod_ready.go:103] pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:12.849550 3796111 pod_ready.go:82] duration metric: took 4m0.00841246s for pod "metrics-server-9975d5f86-mftgr" in "kube-system" namespace to be "Ready" ...
	E0127 02:59:12.849575 3796111 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 02:59:12.849585 3796111 pod_ready.go:39] duration metric: took 5m20.707129361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:59:12.849601 3796111 api_server.go:52] waiting for apiserver process to appear ...
	I0127 02:59:12.849632 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 02:59:12.849779 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 02:59:12.908698 3796111 cri.go:89] found id: "f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:12.908718 3796111 cri.go:89] found id: "ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:12.908724 3796111 cri.go:89] found id: ""
	I0127 02:59:12.908731 3796111 logs.go:282] 2 containers: [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810]
	I0127 02:59:12.908789 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.912398 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.915700 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 02:59:12.915779 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 02:59:12.958472 3796111 cri.go:89] found id: "913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:12.958491 3796111 cri.go:89] found id: "8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:12.958495 3796111 cri.go:89] found id: ""
	I0127 02:59:12.958502 3796111 logs.go:282] 2 containers: [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9]
	I0127 02:59:12.958559 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.962269 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:12.965683 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 02:59:12.965751 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 02:59:13.017093 3796111 cri.go:89] found id: "80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:13.017166 3796111 cri.go:89] found id: "2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:13.017201 3796111 cri.go:89] found id: ""
	I0127 02:59:13.017228 3796111 logs.go:282] 2 containers: [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca]
	I0127 02:59:13.017327 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.021516 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.025341 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 02:59:13.025409 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 02:59:13.080460 3796111 cri.go:89] found id: "6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:13.080485 3796111 cri.go:89] found id: "d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:13.080493 3796111 cri.go:89] found id: ""
	I0127 02:59:13.080502 3796111 logs.go:282] 2 containers: [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3]
	I0127 02:59:13.080571 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.084534 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.088803 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 02:59:13.088877 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 02:59:13.158618 3796111 cri.go:89] found id: "1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:13.158644 3796111 cri.go:89] found id: "a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:13.158650 3796111 cri.go:89] found id: ""
	I0127 02:59:13.158658 3796111 logs.go:282] 2 containers: [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2]
	I0127 02:59:13.158745 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.163024 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.169387 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 02:59:13.169529 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 02:59:13.244336 3796111 cri.go:89] found id: "60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:13.244366 3796111 cri.go:89] found id: "0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:13.244375 3796111 cri.go:89] found id: ""
	I0127 02:59:13.244386 3796111 logs.go:282] 2 containers: [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414]
	I0127 02:59:13.244469 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.248667 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.252725 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 02:59:13.252803 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 02:59:13.300743 3796111 cri.go:89] found id: "792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:13.300767 3796111 cri.go:89] found id: "17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:13.300773 3796111 cri.go:89] found id: ""
	I0127 02:59:13.300781 3796111 logs.go:282] 2 containers: [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a]
	I0127 02:59:13.300838 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.305143 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.309056 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 02:59:13.309127 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 02:59:09.782201 3805055 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 02:59:09.782551 3805055 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-579827 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0127 02:59:09.958467 3805055 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 02:59:10.500355 3805055 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 02:59:11.073771 3805055 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 02:59:11.074066 3805055 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 02:59:11.804359 3805055 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 02:59:12.415719 3805055 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 02:59:13.301526 3805055 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 02:59:14.782627 3805055 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 02:59:15.447904 3805055 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 02:59:15.448933 3805055 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 02:59:15.452383 3805055 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 02:59:13.361247 3796111 cri.go:89] found id: "ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:13.361270 3796111 cri.go:89] found id: "5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:13.361278 3796111 cri.go:89] found id: ""
	I0127 02:59:13.361285 3796111 logs.go:282] 2 containers: [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21]
	I0127 02:59:13.361343 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.365558 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.369392 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 02:59:13.369489 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 02:59:13.420343 3796111 cri.go:89] found id: "3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:13.420366 3796111 cri.go:89] found id: ""
	I0127 02:59:13.420374 3796111 logs.go:282] 1 containers: [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8]
	I0127 02:59:13.420433 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:13.424585 3796111 logs.go:123] Gathering logs for coredns [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477] ...
	I0127 02:59:13.424611 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:13.478057 3796111 logs.go:123] Gathering logs for coredns [2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca] ...
	I0127 02:59:13.478086 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:13.536134 3796111 logs.go:123] Gathering logs for kube-proxy [a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2] ...
	I0127 02:59:13.536162 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:13.604664 3796111 logs.go:123] Gathering logs for kubernetes-dashboard [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8] ...
	I0127 02:59:13.604699 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:13.673791 3796111 logs.go:123] Gathering logs for describe nodes ...
	I0127 02:59:13.673820 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 02:59:13.864687 3796111 logs.go:123] Gathering logs for kube-scheduler [d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3] ...
	I0127 02:59:13.864722 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:13.936957 3796111 logs.go:123] Gathering logs for kube-controller-manager [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e] ...
	I0127 02:59:13.936988 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:14.024358 3796111 logs.go:123] Gathering logs for kube-apiserver [ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810] ...
	I0127 02:59:14.024397 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:14.103841 3796111 logs.go:123] Gathering logs for etcd [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b] ...
	I0127 02:59:14.103876 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:14.203352 3796111 logs.go:123] Gathering logs for etcd [8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9] ...
	I0127 02:59:14.203460 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:14.275290 3796111 logs.go:123] Gathering logs for kube-proxy [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754] ...
	I0127 02:59:14.275372 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:14.335204 3796111 logs.go:123] Gathering logs for kube-controller-manager [0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414] ...
	I0127 02:59:14.335232 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:14.451827 3796111 logs.go:123] Gathering logs for kindnet [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1] ...
	I0127 02:59:14.451917 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:14.528838 3796111 logs.go:123] Gathering logs for container status ...
	I0127 02:59:14.528919 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 02:59:14.607271 3796111 logs.go:123] Gathering logs for dmesg ...
	I0127 02:59:14.607428 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 02:59:14.629049 3796111 logs.go:123] Gathering logs for kube-apiserver [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26] ...
	I0127 02:59:14.629129 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:14.710645 3796111 logs.go:123] Gathering logs for kube-scheduler [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8] ...
	I0127 02:59:14.710736 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:14.765480 3796111 logs.go:123] Gathering logs for kindnet [17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a] ...
	I0127 02:59:14.765553 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:14.828838 3796111 logs.go:123] Gathering logs for storage-provisioner [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae] ...
	I0127 02:59:14.828906 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:14.888808 3796111 logs.go:123] Gathering logs for storage-provisioner [5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21] ...
	I0127 02:59:14.888835 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:14.940906 3796111 logs.go:123] Gathering logs for containerd ...
	I0127 02:59:14.940931 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 02:59:15.014873 3796111 logs.go:123] Gathering logs for kubelet ...
	I0127 02:59:15.014965 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 02:59:15.084568 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967126     660 reflector.go:138] object-"default"/"default-token-gqprk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gqprk" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.084862 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967200     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085108 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967258     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghd6s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghd6s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085384 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967336     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-6zk7s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-6zk7s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085625 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967387     660 reflector.go:138] object-"kube-system"/"coredns-token-l287g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l287g" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.085873 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967447     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.086176 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967496     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-54qrt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-54qrt" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:15.095025 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:53 old-k8s-version-949994 kubelet[660]: E0127 02:53:53.876531     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.095319 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:54 old-k8s-version-949994 kubelet[660]: E0127 02:53:54.021725     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.099091 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:09 old-k8s-version-949994 kubelet[660]: E0127 02:54:09.521072     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.101668 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.278637     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.101911 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.504207     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.102301 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:23 old-k8s-version-949994 kubelet[660]: E0127 02:54:23.282663     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.102683 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:24 old-k8s-version-949994 kubelet[660]: E0127 02:54:24.284977     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.103161 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:26 old-k8s-version-949994 kubelet[660]: E0127 02:54:26.292140     660 pod_workers.go:191] Error syncing pod 2b0aa32b-1180-4a97-8374-d786d139dc2c ("storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"
	W0127 02:59:15.105983 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:37 old-k8s-version-949994 kubelet[660]: E0127 02:54:37.511569     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.106615 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:39 old-k8s-version-949994 kubelet[660]: E0127 02:54:39.335961     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.107194 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:42 old-k8s-version-949994 kubelet[660]: E0127 02:54:42.579255     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.107385 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:48 old-k8s-version-949994 kubelet[660]: E0127 02:54:48.501857     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.107709 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:53 old-k8s-version-949994 kubelet[660]: E0127 02:54:53.502843     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.107890 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:00 old-k8s-version-949994 kubelet[660]: E0127 02:55:00.501841     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.108480 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:07 old-k8s-version-949994 kubelet[660]: E0127 02:55:07.426639     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.108808 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:12 old-k8s-version-949994 kubelet[660]: E0127 02:55:12.579010     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.108989 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:13 old-k8s-version-949994 kubelet[660]: E0127 02:55:13.501799     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.111603 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:24 old-k8s-version-949994 kubelet[660]: E0127 02:55:24.528464     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.111990 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:25 old-k8s-version-949994 kubelet[660]: E0127 02:55:25.501540     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.112206 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:36 old-k8s-version-949994 kubelet[660]: E0127 02:55:36.501984     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.112600 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:37 old-k8s-version-949994 kubelet[660]: E0127 02:55:37.501584     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.112861 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:49 old-k8s-version-949994 kubelet[660]: E0127 02:55:49.505318     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.113505 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:51 old-k8s-version-949994 kubelet[660]: E0127 02:55:51.546741     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.113872 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:52 old-k8s-version-949994 kubelet[660]: E0127 02:55:52.578558     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.114103 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:03 old-k8s-version-949994 kubelet[660]: E0127 02:56:03.501771     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.114473 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:06 old-k8s-version-949994 kubelet[660]: E0127 02:56:06.501392     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.114689 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:14 old-k8s-version-949994 kubelet[660]: E0127 02:56:14.502372     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.115045 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:17 old-k8s-version-949994 kubelet[660]: E0127 02:56:17.501933     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.115268 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:26 old-k8s-version-949994 kubelet[660]: E0127 02:56:26.501683     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.115635 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:32 old-k8s-version-949994 kubelet[660]: E0127 02:56:32.501823     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.115859 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:40 old-k8s-version-949994 kubelet[660]: E0127 02:56:40.501634     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.116289 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:45 old-k8s-version-949994 kubelet[660]: E0127 02:56:45.502780     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.118822 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:55 old-k8s-version-949994 kubelet[660]: E0127 02:56:55.514866     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:15.119188 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:56 old-k8s-version-949994 kubelet[660]: E0127 02:56:56.501563     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.119541 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:07 old-k8s-version-949994 kubelet[660]: E0127 02:57:07.501947     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.119765 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:10 old-k8s-version-949994 kubelet[660]: E0127 02:57:10.502918     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.120388 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:21 old-k8s-version-949994 kubelet[660]: E0127 02:57:21.776454     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.120743 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:22 old-k8s-version-949994 kubelet[660]: E0127 02:57:22.780516     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.120978 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:23 old-k8s-version-949994 kubelet[660]: E0127 02:57:23.505620     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.121189 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:34 old-k8s-version-949994 kubelet[660]: E0127 02:57:34.501538     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.121545 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:35 old-k8s-version-949994 kubelet[660]: E0127 02:57:35.501481     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.121912 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:48 old-k8s-version-949994 kubelet[660]: E0127 02:57:48.503510     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.122150 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:49 old-k8s-version-949994 kubelet[660]: E0127 02:57:49.501969     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.122520 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:59 old-k8s-version-949994 kubelet[660]: E0127 02:57:59.502983     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.122734 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:00 old-k8s-version-949994 kubelet[660]: E0127 02:58:00.501586     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.123100 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:10 old-k8s-version-949994 kubelet[660]: E0127 02:58:10.501269     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.123321 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:15 old-k8s-version-949994 kubelet[660]: E0127 02:58:15.501889     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.123687 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:21 old-k8s-version-949994 kubelet[660]: E0127 02:58:21.501389     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.123897 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:28 old-k8s-version-949994 kubelet[660]: E0127 02:58:28.501733     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.124265 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:34 old-k8s-version-949994 kubelet[660]: E0127 02:58:34.501213     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.124475 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:43 old-k8s-version-949994 kubelet[660]: E0127 02:58:43.501567     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.124832 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.125042 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.125398 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.125620 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.125978 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:15.126008 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:15.126996 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 02:59:15.127126 3796111 out.go:270] X Problems detected in kubelet:
	W0127 02:59:15.127350 3796111 out.go:270]   Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.127379 3796111 out.go:270]   Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.127391 3796111 out.go:270]   Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:15.127397 3796111 out.go:270]   Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:15.127405 3796111 out.go:270]   Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:15.127424 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:15.127438 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:15.455693 3805055 out.go:235]   - Booting up control plane ...
	I0127 02:59:15.455813 3805055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 02:59:15.455890 3805055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 02:59:15.457175 3805055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 02:59:15.471316 3805055 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 02:59:15.477835 3805055 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 02:59:15.478250 3805055 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 02:59:15.588768 3805055 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 02:59:15.588890 3805055 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 02:59:17.090255 3805055 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501460595s
	I0127 02:59:17.090343 3805055 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 02:59:23.591357 3805055 kubeadm.go:310] [api-check] The API server is healthy after 6.501463857s
	I0127 02:59:23.611950 3805055 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 02:59:23.627695 3805055 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 02:59:23.655095 3805055 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 02:59:23.655291 3805055 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-579827 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 02:59:23.667262 3805055 kubeadm.go:310] [bootstrap-token] Using token: iy101u.cl9rxc7eln6chzaj
	I0127 02:59:23.670177 3805055 out.go:235]   - Configuring RBAC rules ...
	I0127 02:59:23.670306 3805055 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 02:59:23.674650 3805055 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 02:59:23.686426 3805055 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 02:59:23.690406 3805055 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 02:59:23.696577 3805055 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 02:59:23.700991 3805055 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 02:59:24.001161 3805055 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 02:59:24.446621 3805055 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 02:59:25.002579 3805055 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 02:59:25.002603 3805055 kubeadm.go:310] 
	I0127 02:59:25.002665 3805055 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 02:59:25.002671 3805055 kubeadm.go:310] 
	I0127 02:59:25.002748 3805055 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 02:59:25.002753 3805055 kubeadm.go:310] 
	I0127 02:59:25.002779 3805055 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 02:59:25.002838 3805055 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 02:59:25.002897 3805055 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 02:59:25.002903 3805055 kubeadm.go:310] 
	I0127 02:59:25.002974 3805055 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 02:59:25.002980 3805055 kubeadm.go:310] 
	I0127 02:59:25.003027 3805055 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 02:59:25.003035 3805055 kubeadm.go:310] 
	I0127 02:59:25.003087 3805055 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 02:59:25.003163 3805055 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 02:59:25.003231 3805055 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 02:59:25.003237 3805055 kubeadm.go:310] 
	I0127 02:59:25.003321 3805055 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 02:59:25.003399 3805055 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 02:59:25.003404 3805055 kubeadm.go:310] 
	I0127 02:59:25.003489 3805055 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iy101u.cl9rxc7eln6chzaj \
	I0127 02:59:25.003593 3805055 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:83891a1b2b837c79fabbfd6fe62cd9786dc4221059a44014b5acb94babe950cd \
	I0127 02:59:25.003614 3805055 kubeadm.go:310] 	--control-plane 
	I0127 02:59:25.003618 3805055 kubeadm.go:310] 
	I0127 02:59:25.003716 3805055 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 02:59:25.003722 3805055 kubeadm.go:310] 
	I0127 02:59:25.003805 3805055 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iy101u.cl9rxc7eln6chzaj \
	I0127 02:59:25.003907 3805055 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:83891a1b2b837c79fabbfd6fe62cd9786dc4221059a44014b5acb94babe950cd 
	I0127 02:59:25.009437 3805055 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 02:59:25.009666 3805055 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 02:59:25.009772 3805055 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 02:59:25.009790 3805055 cni.go:84] Creating CNI manager for ""
	I0127 02:59:25.009798 3805055 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:59:25.013126 3805055 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 02:59:25.128192 3796111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:59:25.142509 3796111 api_server.go:72] duration metric: took 5m53.807437252s to wait for apiserver process to appear ...
	I0127 02:59:25.142534 3796111 api_server.go:88] waiting for apiserver healthz status ...
	I0127 02:59:25.142569 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 02:59:25.142630 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 02:59:25.218274 3796111 cri.go:89] found id: "f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:25.218294 3796111 cri.go:89] found id: "ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:25.218299 3796111 cri.go:89] found id: ""
	I0127 02:59:25.218306 3796111 logs.go:282] 2 containers: [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810]
	I0127 02:59:25.218366 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.223228 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.233535 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 02:59:25.233608 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 02:59:25.287247 3796111 cri.go:89] found id: "913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:25.287268 3796111 cri.go:89] found id: "8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:25.287273 3796111 cri.go:89] found id: ""
	I0127 02:59:25.287281 3796111 logs.go:282] 2 containers: [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9]
	I0127 02:59:25.287350 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.291869 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.296041 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 02:59:25.296114 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 02:59:25.353617 3796111 cri.go:89] found id: "80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:25.353636 3796111 cri.go:89] found id: "2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:25.353641 3796111 cri.go:89] found id: ""
	I0127 02:59:25.353648 3796111 logs.go:282] 2 containers: [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca]
	I0127 02:59:25.353712 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.358444 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.362671 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 02:59:25.362745 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 02:59:25.420248 3796111 cri.go:89] found id: "6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:25.420268 3796111 cri.go:89] found id: "d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:25.420273 3796111 cri.go:89] found id: ""
	I0127 02:59:25.420280 3796111 logs.go:282] 2 containers: [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3]
	I0127 02:59:25.420338 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.425743 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.432269 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 02:59:25.432340 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 02:59:25.493625 3796111 cri.go:89] found id: "1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:25.493696 3796111 cri.go:89] found id: "a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:25.493715 3796111 cri.go:89] found id: ""
	I0127 02:59:25.493738 3796111 logs.go:282] 2 containers: [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2]
	I0127 02:59:25.493833 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.499566 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.504443 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 02:59:25.504514 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 02:59:25.580657 3796111 cri.go:89] found id: "60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:25.580678 3796111 cri.go:89] found id: "0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:25.580683 3796111 cri.go:89] found id: ""
	I0127 02:59:25.580690 3796111 logs.go:282] 2 containers: [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414]
	I0127 02:59:25.580745 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.587524 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.592431 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 02:59:25.592584 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 02:59:25.652963 3796111 cri.go:89] found id: "792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:25.653037 3796111 cri.go:89] found id: "17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:25.653056 3796111 cri.go:89] found id: ""
	I0127 02:59:25.653080 3796111 logs.go:282] 2 containers: [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a]
	I0127 02:59:25.653174 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.658424 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.664336 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 02:59:25.664466 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 02:59:25.739132 3796111 cri.go:89] found id: "3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:25.739205 3796111 cri.go:89] found id: ""
	I0127 02:59:25.739229 3796111 logs.go:282] 1 containers: [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8]
	I0127 02:59:25.739320 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.743622 3796111 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 02:59:25.743746 3796111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 02:59:25.846595 3796111 cri.go:89] found id: "ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:25.846658 3796111 cri.go:89] found id: "5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:25.846686 3796111 cri.go:89] found id: ""
	I0127 02:59:25.846705 3796111 logs.go:282] 2 containers: [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21]
	I0127 02:59:25.846798 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.851520 3796111 ssh_runner.go:195] Run: which crictl
	I0127 02:59:25.856633 3796111 logs.go:123] Gathering logs for kindnet [17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a] ...
	I0127 02:59:25.856708 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a"
	I0127 02:59:25.916964 3796111 logs.go:123] Gathering logs for storage-provisioner [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae] ...
	I0127 02:59:25.917144 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae"
	I0127 02:59:25.972487 3796111 logs.go:123] Gathering logs for containerd ...
	I0127 02:59:25.972512 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 02:59:26.054510 3796111 logs.go:123] Gathering logs for describe nodes ...
	I0127 02:59:26.054548 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 02:59:26.207339 3796111 logs.go:123] Gathering logs for kube-apiserver [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26] ...
	I0127 02:59:26.207370 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26"
	I0127 02:59:26.265602 3796111 logs.go:123] Gathering logs for etcd [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b] ...
	I0127 02:59:26.265637 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b"
	I0127 02:59:26.318707 3796111 logs.go:123] Gathering logs for kube-scheduler [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8] ...
	I0127 02:59:26.318739 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8"
	I0127 02:59:26.361129 3796111 logs.go:123] Gathering logs for kube-proxy [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754] ...
	I0127 02:59:26.361156 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754"
	I0127 02:59:26.413909 3796111 logs.go:123] Gathering logs for kube-controller-manager [0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414] ...
	I0127 02:59:26.413937 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414"
	I0127 02:59:26.493497 3796111 logs.go:123] Gathering logs for kindnet [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1] ...
	I0127 02:59:26.493585 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1"
	I0127 02:59:26.544762 3796111 logs.go:123] Gathering logs for kubernetes-dashboard [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8] ...
	I0127 02:59:26.544794 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8"
	I0127 02:59:26.595658 3796111 logs.go:123] Gathering logs for dmesg ...
	I0127 02:59:26.595688 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 02:59:26.613129 3796111 logs.go:123] Gathering logs for kube-apiserver [ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810] ...
	I0127 02:59:26.613204 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810"
	I0127 02:59:26.669414 3796111 logs.go:123] Gathering logs for coredns [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477] ...
	I0127 02:59:26.669490 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477"
	I0127 02:59:26.717840 3796111 logs.go:123] Gathering logs for coredns [2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca] ...
	I0127 02:59:26.717869 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca"
	I0127 02:59:26.757167 3796111 logs.go:123] Gathering logs for kube-proxy [a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2] ...
	I0127 02:59:26.757196 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2"
	I0127 02:59:26.798051 3796111 logs.go:123] Gathering logs for container status ...
	I0127 02:59:26.798086 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 02:59:26.845207 3796111 logs.go:123] Gathering logs for kubelet ...
	I0127 02:59:26.845236 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 02:59:26.909018 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967126     660 reflector.go:138] object-"default"/"default-token-gqprk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gqprk" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909268 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967200     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909505 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967258     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghd6s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghd6s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909756 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967336     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-6zk7s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-6zk7s" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.909988 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967387     660 reflector.go:138] object-"kube-system"/"coredns-token-l287g": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-l287g" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.910228 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967447     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.910466 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:51 old-k8s-version-949994 kubelet[660]: E0127 02:53:51.967496     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-54qrt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-54qrt" is forbidden: User "system:node:old-k8s-version-949994" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-949994' and this object
	W0127 02:59:26.918459 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:53 old-k8s-version-949994 kubelet[660]: E0127 02:53:53.876531     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.918698 3796111 logs.go:138] Found kubelet problem: Jan 27 02:53:54 old-k8s-version-949994 kubelet[660]: E0127 02:53:54.021725     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.925130 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:09 old-k8s-version-949994 kubelet[660]: E0127 02:54:09.521072     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.927635 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.278637     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.927828 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:22 old-k8s-version-949994 kubelet[660]: E0127 02:54:22.504207     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.928157 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:23 old-k8s-version-949994 kubelet[660]: E0127 02:54:23.282663     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.928494 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:24 old-k8s-version-949994 kubelet[660]: E0127 02:54:24.284977     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.929099 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:26 old-k8s-version-949994 kubelet[660]: E0127 02:54:26.292140     660 pod_workers.go:191] Error syncing pod 2b0aa32b-1180-4a97-8374-d786d139dc2c ("storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2b0aa32b-1180-4a97-8374-d786d139dc2c)"
	W0127 02:59:26.932158 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:37 old-k8s-version-949994 kubelet[660]: E0127 02:54:37.511569     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.932846 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:39 old-k8s-version-949994 kubelet[660]: E0127 02:54:39.335961     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.933330 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:42 old-k8s-version-949994 kubelet[660]: E0127 02:54:42.579255     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.933538 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:48 old-k8s-version-949994 kubelet[660]: E0127 02:54:48.501857     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.933886 3796111 logs.go:138] Found kubelet problem: Jan 27 02:54:53 old-k8s-version-949994 kubelet[660]: E0127 02:54:53.502843     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.934090 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:00 old-k8s-version-949994 kubelet[660]: E0127 02:55:00.501841     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.934727 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:07 old-k8s-version-949994 kubelet[660]: E0127 02:55:07.426639     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.935133 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:12 old-k8s-version-949994 kubelet[660]: E0127 02:55:12.579010     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.935339 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:13 old-k8s-version-949994 kubelet[660]: E0127 02:55:13.501799     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.938047 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:24 old-k8s-version-949994 kubelet[660]: E0127 02:55:24.528464     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.938415 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:25 old-k8s-version-949994 kubelet[660]: E0127 02:55:25.501540     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.938630 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:36 old-k8s-version-949994 kubelet[660]: E0127 02:55:36.501984     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.938994 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:37 old-k8s-version-949994 kubelet[660]: E0127 02:55:37.501584     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.939204 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:49 old-k8s-version-949994 kubelet[660]: E0127 02:55:49.505318     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.939825 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:51 old-k8s-version-949994 kubelet[660]: E0127 02:55:51.546741     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.940171 3796111 logs.go:138] Found kubelet problem: Jan 27 02:55:52 old-k8s-version-949994 kubelet[660]: E0127 02:55:52.578558     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.940412 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:03 old-k8s-version-949994 kubelet[660]: E0127 02:56:03.501771     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.940764 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:06 old-k8s-version-949994 kubelet[660]: E0127 02:56:06.501392     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.940981 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:14 old-k8s-version-949994 kubelet[660]: E0127 02:56:14.502372     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.941335 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:17 old-k8s-version-949994 kubelet[660]: E0127 02:56:17.501933     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.941568 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:26 old-k8s-version-949994 kubelet[660]: E0127 02:56:26.501683     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.941996 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:32 old-k8s-version-949994 kubelet[660]: E0127 02:56:32.501823     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.942196 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:40 old-k8s-version-949994 kubelet[660]: E0127 02:56:40.501634     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.942555 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:45 old-k8s-version-949994 kubelet[660]: E0127 02:56:45.502780     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.945106 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:55 old-k8s-version-949994 kubelet[660]: E0127 02:56:55.514866     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 02:59:26.945463 3796111 logs.go:138] Found kubelet problem: Jan 27 02:56:56 old-k8s-version-949994 kubelet[660]: E0127 02:56:56.501563     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.945853 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:07 old-k8s-version-949994 kubelet[660]: E0127 02:57:07.501947     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.946094 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:10 old-k8s-version-949994 kubelet[660]: E0127 02:57:10.502918     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.946801 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:21 old-k8s-version-949994 kubelet[660]: E0127 02:57:21.776454     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.947158 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:22 old-k8s-version-949994 kubelet[660]: E0127 02:57:22.780516     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.947364 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:23 old-k8s-version-949994 kubelet[660]: E0127 02:57:23.505620     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.947572 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:34 old-k8s-version-949994 kubelet[660]: E0127 02:57:34.501538     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.947920 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:35 old-k8s-version-949994 kubelet[660]: E0127 02:57:35.501481     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.948269 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:48 old-k8s-version-949994 kubelet[660]: E0127 02:57:48.503510     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.948518 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:49 old-k8s-version-949994 kubelet[660]: E0127 02:57:49.501969     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.949022 3796111 logs.go:138] Found kubelet problem: Jan 27 02:57:59 old-k8s-version-949994 kubelet[660]: E0127 02:57:59.502983     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.949233 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:00 old-k8s-version-949994 kubelet[660]: E0127 02:58:00.501586     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.949595 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:10 old-k8s-version-949994 kubelet[660]: E0127 02:58:10.501269     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.949802 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:15 old-k8s-version-949994 kubelet[660]: E0127 02:58:15.501889     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.950158 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:21 old-k8s-version-949994 kubelet[660]: E0127 02:58:21.501389     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.950395 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:28 old-k8s-version-949994 kubelet[660]: E0127 02:58:28.501733     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.950751 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:34 old-k8s-version-949994 kubelet[660]: E0127 02:58:34.501213     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.950961 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:43 old-k8s-version-949994 kubelet[660]: E0127 02:58:43.501567     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.951309 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.951528 3796111 logs.go:138] Found kubelet problem: Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.951914 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.952124 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.952532 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:26.953020 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:18 old-k8s-version-949994 kubelet[660]: E0127 02:59:18.502650     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:26.953379 3796111 logs.go:138] Found kubelet problem: Jan 27 02:59:26 old-k8s-version-949994 kubelet[660]: E0127 02:59:26.501368     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:26.953395 3796111 logs.go:123] Gathering logs for etcd [8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9] ...
	I0127 02:59:26.953420 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9"
	I0127 02:59:27.014044 3796111 logs.go:123] Gathering logs for kube-scheduler [d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3] ...
	I0127 02:59:27.014093 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3"
	I0127 02:59:27.059697 3796111 logs.go:123] Gathering logs for kube-controller-manager [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e] ...
	I0127 02:59:27.059730 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e"
	I0127 02:59:27.129429 3796111 logs.go:123] Gathering logs for storage-provisioner [5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21] ...
	I0127 02:59:27.129470 3796111 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21"
	I0127 02:59:27.169921 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:27.169952 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 02:59:27.170000 3796111 out.go:270] X Problems detected in kubelet:
	W0127 02:59:27.170013 3796111 out.go:270]   Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:27.170021 3796111 out.go:270]   Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:27.170035 3796111 out.go:270]   Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	W0127 02:59:27.170041 3796111 out.go:270]   Jan 27 02:59:18 old-k8s-version-949994 kubelet[660]: E0127 02:59:18.502650     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 02:59:27.170051 3796111 out.go:270]   Jan 27 02:59:26 old-k8s-version-949994 kubelet[660]: E0127 02:59:26.501368     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	I0127 02:59:27.170057 3796111 out.go:358] Setting ErrFile to fd 2...
	I0127 02:59:27.170066 3796111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:59:25.016018 3805055 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 02:59:25.020386 3805055 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 02:59:25.020408 3805055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 02:59:25.041216 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 02:59:25.547951 3805055 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 02:59:25.548107 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:25.548155 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-579827 minikube.k8s.io/updated_at=2025_01_27T02_59_25_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95 minikube.k8s.io/name=embed-certs-579827 minikube.k8s.io/primary=true
	I0127 02:59:25.869409 3805055 ops.go:34] apiserver oom_adj: -16
	I0127 02:59:25.869529 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:26.370214 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:26.869629 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:27.369565 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:27.870408 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:28.369634 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:28.870528 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:29.370301 3805055 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 02:59:29.496179 3805055 kubeadm.go:1113] duration metric: took 3.948138748s to wait for elevateKubeSystemPrivileges
	I0127 02:59:29.496205 3805055 kubeadm.go:394] duration metric: took 23.885958547s to StartCluster
	I0127 02:59:29.496222 3805055 settings.go:142] acquiring lock: {Name:mk735c76882f337c2ca62b3dd2d1bbcced4c92cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:29.496282 3805055 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:59:29.497684 3805055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/kubeconfig: {Name:mkc8ad8c78feebc7c27d31aea066c6fc5e1767bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:59:29.497897 3805055 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 02:59:29.498034 3805055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 02:59:29.498408 3805055 config.go:182] Loaded profile config "embed-certs-579827": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:59:29.498457 3805055 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 02:59:29.498534 3805055 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-579827"
	I0127 02:59:29.498555 3805055 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-579827"
	I0127 02:59:29.498580 3805055 host.go:66] Checking if "embed-certs-579827" exists ...
	I0127 02:59:29.498961 3805055 addons.go:69] Setting default-storageclass=true in profile "embed-certs-579827"
	I0127 02:59:29.498984 3805055 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-579827"
	I0127 02:59:29.499190 3805055 cli_runner.go:164] Run: docker container inspect embed-certs-579827 --format={{.State.Status}}
	I0127 02:59:29.499310 3805055 cli_runner.go:164] Run: docker container inspect embed-certs-579827 --format={{.State.Status}}
	I0127 02:59:29.500958 3805055 out.go:177] * Verifying Kubernetes components...
	I0127 02:59:29.505053 3805055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 02:59:29.538004 3805055 addons.go:238] Setting addon default-storageclass=true in "embed-certs-579827"
	I0127 02:59:29.538045 3805055 host.go:66] Checking if "embed-certs-579827" exists ...
	I0127 02:59:29.538614 3805055 cli_runner.go:164] Run: docker container inspect embed-certs-579827 --format={{.State.Status}}
	I0127 02:59:29.547151 3805055 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 02:59:29.550120 3805055 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:59:29.550147 3805055 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 02:59:29.550214 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:59:29.586961 3805055 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 02:59:29.586984 3805055 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 02:59:29.587046 3805055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-579827
	I0127 02:59:29.592981 3805055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37791 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa Username:docker}
	I0127 02:59:29.618740 3805055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37791 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/embed-certs-579827/id_rsa Username:docker}
	I0127 02:59:29.854998 3805055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 02:59:29.884895 3805055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 02:59:29.884999 3805055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 02:59:29.889554 3805055 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 02:59:30.915508 3805055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.060474429s)
	I0127 02:59:30.915624 3805055 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.030606968s)
	I0127 02:59:30.915682 3805055 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.030764224s)
	I0127 02:59:30.915836 3805055 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0127 02:59:30.915703 3805055 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026131813s)
	I0127 02:59:30.918265 3805055 node_ready.go:35] waiting up to 6m0s for node "embed-certs-579827" to be "Ready" ...
	I0127 02:59:30.966839 3805055 node_ready.go:49] node "embed-certs-579827" has status "Ready":"True"
	I0127 02:59:30.966914 3805055 node_ready.go:38] duration metric: took 48.525691ms for node "embed-certs-579827" to be "Ready" ...
	I0127 02:59:30.966941 3805055 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 02:59:30.983581 3805055 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-frvqr" in "kube-system" namespace to be "Ready" ...
	I0127 02:59:30.993289 3805055 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 02:59:30.996202 3805055 addons.go:514] duration metric: took 1.497733224s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 02:59:31.425521 3805055 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-579827" context rescaled to 1 replicas
	I0127 02:59:32.989678 3805055 pod_ready.go:103] pod "coredns-668d6bf9bc-frvqr" in "kube-system" namespace has status "Ready":"False"
	I0127 02:59:37.172121 3796111 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 02:59:37.181232 3796111 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 02:59:37.184443 3796111 out.go:201] 
	W0127 02:59:37.187360 3796111 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0127 02:59:37.187406 3796111 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0127 02:59:37.187429 3796111 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0127 02:59:37.187435 3796111 out.go:270] * 
	W0127 02:59:37.188343 3796111 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 02:59:37.192140 3796111 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	8e12f3bd92557       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   f91133a68a94e       dashboard-metrics-scraper-8d5bb5db8-wvx62
	ca016e85c640c       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         3                   e5efb2d9c6319       storage-provisioner
	3e757cf47e5bf       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   c7ce7653d9b35       kubernetes-dashboard-cd95d586-9bnwh
	792e1bf0751e8       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   a335d5e635a4a       kindnet-bcq52
	5a9b35cdfa2e3       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   a0efc575c5ac5       busybox
	1eebf1b8ced69       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   db6072908f129       kube-proxy-5hzlg
	80de12a579f09       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   ec3d66ddfd0e1       coredns-74ff55c5b-fbwzt
	5966fc744604d       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   e5efb2d9c6319       storage-provisioner
	6868e5588e251       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   d5fd4e74286f7       kube-scheduler-old-k8s-version-949994
	913dba7b3bede       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   c8ab2292cc8e0       etcd-old-k8s-version-949994
	60bc065c8667a       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   4ba1d572c629b       kube-controller-manager-old-k8s-version-949994
	f1ff631138f97       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   6e298968924d7       kube-apiserver-old-k8s-version-949994
	70b45a2ee4146       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   52dca43cebcb2       busybox
	2333f389a3e6e       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   5c3278ebdcc94       coredns-74ff55c5b-fbwzt
	17ad11f28e207       2be0bcf609c65       8 minutes ago       Exited              kindnet-cni                 0                   f789da568db34       kindnet-bcq52
	a7fa3720da9a7       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   a6ca0b0ba5cd7       kube-proxy-5hzlg
	d422478362adb       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   9c150a719d661       kube-scheduler-old-k8s-version-949994
	8d9862575a02a       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   fa9d5d288f999       etcd-old-k8s-version-949994
	0f4ff9b6b17b8       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   804bea710a235       kube-controller-manager-old-k8s-version-949994
	ae3fb4241ae87       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   429c8054ccaae       kube-apiserver-old-k8s-version-949994
	
	
	==> containerd <==
	Jan 27 02:55:24 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:24.527176205Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.519167064Z" level=info msg="CreateContainer within sandbox \"f91133a68a94ed3a34acfcf0d38823f434f11530b2b2abd3ad4e508df67feb08\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.545408010Z" level=info msg="CreateContainer within sandbox \"f91133a68a94ed3a34acfcf0d38823f434f11530b2b2abd3ad4e508df67feb08\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9\""
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.546472983Z" level=info msg="StartContainer for \"6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9\""
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.616540025Z" level=info msg="StartContainer for \"6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9\" returns successfully"
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.616776148Z" level=info msg="received exit event container_id:\"6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9\" id:\"6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9\" pid:3052 exit_status:255 exited_at:{seconds:1737946550 nanos:614414682}"
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.643460155Z" level=info msg="shim disconnected" id=6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9 namespace=k8s.io
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.643667257Z" level=warning msg="cleaning up after shim disconnected" id=6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9 namespace=k8s.io
	Jan 27 02:55:50 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:50.643695580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 02:55:51 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:51.548162552Z" level=info msg="RemoveContainer for \"0a7a8a94135bf89ed8f15bdee8e8a803456ec1d823f78707e4ca6237f4985b50\""
	Jan 27 02:55:51 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:55:51.554254639Z" level=info msg="RemoveContainer for \"0a7a8a94135bf89ed8f15bdee8e8a803456ec1d823f78707e4ca6237f4985b50\" returns successfully"
	Jan 27 02:56:55 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:56:55.507126436Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:56:55 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:56:55.512174067Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jan 27 02:56:55 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:56:55.514262600Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 02:56:55 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:56:55.514261394Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.503591605Z" level=info msg="CreateContainer within sandbox \"f91133a68a94ed3a34acfcf0d38823f434f11530b2b2abd3ad4e508df67feb08\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.523275531Z" level=info msg="CreateContainer within sandbox \"f91133a68a94ed3a34acfcf0d38823f434f11530b2b2abd3ad4e508df67feb08\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5\""
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.523958890Z" level=info msg="StartContainer for \"8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5\""
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.590953802Z" level=info msg="StartContainer for \"8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5\" returns successfully"
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.590967841Z" level=info msg="received exit event container_id:\"8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5\" id:\"8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5\" pid:3306 exit_status:255 exited_at:{seconds:1737946641 nanos:589629767}"
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.614650087Z" level=info msg="shim disconnected" id=8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5 namespace=k8s.io
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.614707858Z" level=warning msg="cleaning up after shim disconnected" id=8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5 namespace=k8s.io
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.614719550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.778730721Z" level=info msg="RemoveContainer for \"6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9\""
	Jan 27 02:57:21 old-k8s-version-949994 containerd[569]: time="2025-01-27T02:57:21.795804573Z" level=info msg="RemoveContainer for \"6ed7d41a0aae9c99029d583dfcde00a78d29b24b4f437bf4f802b1ef8b79cff9\" returns successfully"
	
	
	==> coredns [2333f389a3e6eb63094687bfb2df8df33f3fdb8498fc24f92452e8fffd240cca] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56592 - 36868 "HINFO IN 7803141744520437666.6111103402001657934. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016340116s
	
	
	==> coredns [80de12a579f093bfca496c919408519334bb6b6962effaec3c52c2cef2014477] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56612 - 38465 "HINFO IN 231689788770517344.4178049515579576848. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.026532326s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0127 02:54:25.666025       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 02:53:55.665216673 +0000 UTC m=+0.043236747) (total time: 30.000707044s):
	Trace[2019727887]: [30.000707044s] [30.000707044s] END
	I0127 02:54:25.666289       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 02:53:55.665489866 +0000 UTC m=+0.043509949) (total time: 30.000752746s):
	Trace[1427131847]: [30.000752746s] [30.000752746s] END
	E0127 02:54:25.666313       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0127 02:54:25.666290       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 02:54:25.666500       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 02:53:55.665590796 +0000 UTC m=+0.043610862) (total time: 30.0008915s):
	Trace[911902081]: [30.0008915s] [30.0008915s] END
	E0127 02:54:25.666513       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-949994
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-949994
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6bb462d349d93b9bf1c5a4f87817e5e9ea11cc95
	                    minikube.k8s.io/name=old-k8s-version-949994
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T02_50_48_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 02:50:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-949994
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 02:59:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 02:54:42 +0000   Mon, 27 Jan 2025 02:50:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 02:54:42 +0000   Mon, 27 Jan 2025 02:50:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 02:54:42 +0000   Mon, 27 Jan 2025 02:50:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 02:54:42 +0000   Mon, 27 Jan 2025 02:51:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-949994
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f90d26273ab4326af8c0233a4788df8
	  System UUID:                bb5d2b07-83a9-498f-8d6b-125d5070542b
	  Boot ID:                    ed5e2339-9d7b-4ad8-ab13-7fed1ac53390
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-74ff55c5b-fbwzt                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m36s
	  kube-system                 etcd-old-k8s-version-949994                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m42s
	  kube-system                 kindnet-bcq52                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m36s
	  kube-system                 kube-apiserver-old-k8s-version-949994             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-controller-manager-old-k8s-version-949994    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-proxy-5hzlg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-scheduler-old-k8s-version-949994             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 metrics-server-9975d5f86-mftgr                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m35s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-wvx62         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-9bnwh               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  9m2s (x5 over 9m2s)    kubelet     Node old-k8s-version-949994 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m2s (x4 over 9m2s)    kubelet     Node old-k8s-version-949994 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m2s (x4 over 9m2s)    kubelet     Node old-k8s-version-949994 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m43s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m43s                  kubelet     Node old-k8s-version-949994 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m43s                  kubelet     Node old-k8s-version-949994 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m43s                  kubelet     Node old-k8s-version-949994 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m36s                  kubelet     Node old-k8s-version-949994 status is now: NodeReady
	  Normal  Starting                 8m33s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-949994 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-949994 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x7 over 5m59s)  kubelet     Node old-k8s-version-949994 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m43s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan27 01:33] systemd-journald[221]: Failed to send WATCHDOG=1 notification message: Connection refused
	[Jan27 01:42] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/28/fs': -2
	[Jan27 02:53] systemd-journald[223]: Failed to send WATCHDOG=1 notification message: Connection refused
	
	
	==> etcd [8d9862575a02aedaa20cbdbc29f350c7d048f408b17d76e6899b001882f093b9] <==
	raft2025/01/27 02:50:37 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2025-01-27 02:50:37.111434 I | etcdserver: setting up the initial cluster version to 3.4
	2025-01-27 02:50:37.111588 I | embed: listening for metrics on http://127.0.0.1:2381
	2025-01-27 02:50:37.111779 I | embed: listening for peers on 192.168.76.2:2380
	2025-01-27 02:50:37.114365 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-01-27 02:50:37.126237 I | etcdserver/api: enabled capabilities for version 3.4
	2025-01-27 02:50:37.127590 I | etcdserver: published {Name:old-k8s-version-949994 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2025-01-27 02:50:37.127738 I | embed: ready to serve client requests
	2025-01-27 02:50:37.129216 I | embed: serving client requests on 192.168.76.2:2379
	2025-01-27 02:50:37.139094 I | embed: ready to serve client requests
	2025-01-27 02:50:37.140595 I | embed: serving client requests on 127.0.0.1:2379
	2025-01-27 02:50:46.133996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:51:06.118823 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:51:12.598045 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:51:22.597983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:51:32.596984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:51:42.596843 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:51:52.596886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:52:02.596989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:52:12.597028 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:52:22.596967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:52:32.596938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:52:42.596787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:52:52.596828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:53:02.596852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [913dba7b3bede3cb8e1b77be0a25d981b94c7ef9011ec91d80f77d6b7afc771b] <==
	2025-01-27 02:55:36.261736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:55:46.261205 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:55:56.261226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:56:06.261221 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:56:16.261232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:56:26.261482 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:56:36.261206 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:56:46.261258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:56:56.261216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:57:06.261128 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:57:16.261221 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:57:26.261223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:57:36.261330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:57:46.261323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:57:56.261275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:58:06.261145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:58:16.261317 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:58:26.261192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:58:36.261134 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:58:46.263745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:58:56.261225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:59:06.261400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:59:16.261491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:59:26.261344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 02:59:36.261311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 02:59:38 up 1 day,  1:42,  0 users,  load average: 4.50, 2.82, 3.01
	Linux old-k8s-version-949994 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [17ad11f28e20777140f3d682704f30e56a34c3f85630bf0758795ca29f6f719a] <==
	I0127 02:51:06.684160       1 controller.go:401] Syncing nftables rules
	I0127 02:51:16.491517       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:51:16.491588       1 main.go:301] handling current node
	I0127 02:51:26.483246       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:51:26.483280       1 main.go:301] handling current node
	I0127 02:51:36.483217       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:51:36.483254       1 main.go:301] handling current node
	I0127 02:51:46.492208       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:51:46.492244       1 main.go:301] handling current node
	I0127 02:51:56.492004       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:51:56.492040       1 main.go:301] handling current node
	I0127 02:52:06.483523       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:52:06.483557       1 main.go:301] handling current node
	I0127 02:52:16.491291       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:52:16.492304       1 main.go:301] handling current node
	I0127 02:52:26.483281       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:52:26.483315       1 main.go:301] handling current node
	I0127 02:52:36.483423       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:52:36.483456       1 main.go:301] handling current node
	I0127 02:52:46.491336       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:52:46.491574       1 main.go:301] handling current node
	I0127 02:52:56.482884       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:52:56.482945       1 main.go:301] handling current node
	I0127 02:53:06.483150       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:53:06.483245       1 main.go:301] handling current node
	
	
	==> kindnet [792e1bf0751e8fad4834403cd7db06cfbd9065c482e6c7b3bcf4c0c2b1a373d1] <==
	I0127 02:57:36.891690       1 main.go:301] handling current node
	I0127 02:57:46.891419       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:57:46.891466       1 main.go:301] handling current node
	I0127 02:57:56.882968       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:57:56.883082       1 main.go:301] handling current node
	I0127 02:58:06.889341       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:58:06.889445       1 main.go:301] handling current node
	I0127 02:58:16.890242       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:58:16.890279       1 main.go:301] handling current node
	I0127 02:58:26.892254       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:58:26.892287       1 main.go:301] handling current node
	I0127 02:58:36.890251       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:58:36.890293       1 main.go:301] handling current node
	I0127 02:58:46.885003       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:58:46.885040       1 main.go:301] handling current node
	I0127 02:58:56.883270       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:58:56.883309       1 main.go:301] handling current node
	I0127 02:59:06.890453       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:59:06.890739       1 main.go:301] handling current node
	I0127 02:59:16.890291       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:59:16.890549       1 main.go:301] handling current node
	I0127 02:59:26.883432       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:59:26.883491       1 main.go:301] handling current node
	I0127 02:59:36.891644       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 02:59:36.891681       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ae3fb4241ae87bccf92d74ec50fcca30938fee4da6ee2129c5306d476811e810] <==
	I0127 02:50:45.069857       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0127 02:50:45.069904       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 02:50:45.095787       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0127 02:50:45.101972       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0127 02:50:45.101998       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0127 02:50:45.669644       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 02:50:45.715869       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0127 02:50:45.829683       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 02:50:45.832218       1 controller.go:606] quota admission added evaluator for: endpoints
	I0127 02:50:45.842470       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 02:50:46.724017       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0127 02:50:47.282662       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0127 02:50:47.333581       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0127 02:50:55.835195       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 02:51:02.654732       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0127 02:51:02.704658       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0127 02:51:19.812506       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:51:19.812571       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:51:19.812580       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 02:52:01.093675       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:52:01.093807       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:52:01.094047       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 02:52:41.624064       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:52:41.624106       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:52:41.624115       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [f1ff631138f971f0d0175bbdad8dae389b0bb6d344b9c6a0c0ee143eb981fd26] <==
	I0127 02:56:41.660338       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:56:41.660356       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0127 02:56:56.478730       1 handler_proxy.go:102] no RequestInfo found in the context
	E0127 02:56:56.478804       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0127 02:56:56.478813       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 02:57:15.270908       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:57:15.270955       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:57:15.270964       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 02:57:52.799426       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:57:52.799474       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:57:52.799483       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 02:58:25.247963       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:58:25.248073       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:58:25.248102       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0127 02:58:53.035435       1 handler_proxy.go:102] no RequestInfo found in the context
	E0127 02:58:53.035686       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0127 02:58:53.035775       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 02:59:01.977336       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:59:01.977399       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:59:01.977409       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 02:59:34.269911       1 client.go:360] parsed scheme: "passthrough"
	I0127 02:59:34.269955       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 02:59:34.269964       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [0f4ff9b6b17b830c0f2505936c14a703649af8de56a7960df790d5e970f75414] <==
	I0127 02:51:02.771103       1 shared_informer.go:247] Caches are synced for taint 
	I0127 02:51:02.771180       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0127 02:51:02.771233       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-949994. Assuming now as a timestamp.
	I0127 02:51:02.771269       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0127 02:51:02.771311       1 event.go:291] "Event occurred" object="old-k8s-version-949994" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-949994 event: Registered Node old-k8s-version-949994 in Controller"
	I0127 02:51:02.771335       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0127 02:51:02.785044       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-ssrlm"
	I0127 02:51:02.786560       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5hzlg"
	I0127 02:51:02.786579       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bcq52"
	I0127 02:51:02.832454       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-949994" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0127 02:51:02.869928       1 shared_informer.go:247] Caches are synced for HPA 
	I0127 02:51:02.878800       1 shared_informer.go:247] Caches are synced for resource quota 
	I0127 02:51:02.901093       1 shared_informer.go:247] Caches are synced for resource quota 
	E0127 02:51:02.972013       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"b302327d-6f13-4fdb-b1f2-6c0e9db09610", ResourceVersion:"391", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63873543047, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000e45400), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000e45440)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000e45460), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000e45480)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000e454a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001bc43c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e454c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e454e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000e45520)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001b15aa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40014bc5c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400024fc00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40006787f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40014bc618)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0127 02:51:02.980563       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"ebd03435-925e-4ca5-959a-b5a6c36a2ccd", ResourceVersion:"402", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63873543048, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40016527e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001652800)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001652820), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001652840)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001652860), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001652880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40016528a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40016528c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241108-5c6d2daf", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40016528e0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001652920)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001696a20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001688998), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400011c850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000678b58)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40016889e0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0127 02:51:03.048039       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0127 02:51:03.348237       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0127 02:51:03.370345       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0127 02:51:03.370381       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0127 02:51:04.100803       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0127 02:51:04.145837       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ssrlm"
	I0127 02:51:07.771530       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0127 02:53:10.684989       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0127 02:53:10.780764       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0127 02:53:10.781003       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [60bc065c8667a83052202f7fc37006df6160a0f60502485d3fe888d752f6e93e] <==
	W0127 02:55:17.229912       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:55:43.268141       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:55:48.880511       1 request.go:655] Throttling request took 1.048231715s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 02:55:49.732069       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:56:13.773756       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:56:21.382549       1 request.go:655] Throttling request took 1.048254626s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0127 02:56:22.234314       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:56:44.275657       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:56:53.884856       1 request.go:655] Throttling request took 1.048245575s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0127 02:56:54.736183       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:57:14.777467       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:57:26.386788       1 request.go:655] Throttling request took 1.048379285s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 02:57:27.238154       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:57:45.280080       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:57:58.888675       1 request.go:655] Throttling request took 1.048394592s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 02:57:59.740095       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:58:15.782200       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:58:31.390648       1 request.go:655] Throttling request took 1.047709996s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 02:58:32.241994       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:58:46.284419       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:59:03.892684       1 request.go:655] Throttling request took 1.047005175s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0127 02:59:04.744151       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 02:59:16.787020       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 02:59:36.394782       1 request.go:655] Throttling request took 1.048253056s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0127 02:59:37.246557       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [1eebf1b8ced6916d55f7a34166585a17437e60b2716ed0809ac349c450b4e754] <==
	I0127 02:53:55.893785       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0127 02:53:55.893866       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0127 02:53:55.947292       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0127 02:53:55.947453       1 server_others.go:185] Using iptables Proxier.
	I0127 02:53:55.947984       1 server.go:650] Version: v1.20.0
	I0127 02:53:55.948890       1 config.go:315] Starting service config controller
	I0127 02:53:55.948902       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0127 02:53:55.948923       1 config.go:224] Starting endpoint slice config controller
	I0127 02:53:55.948927       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0127 02:53:56.049041       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0127 02:53:56.049114       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [a7fa3720da9a726a9f6f8d791e969823b89c23d447e9434ee4a64f80336f7aa2] <==
	I0127 02:51:05.091156       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0127 02:51:05.091265       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0127 02:51:05.119802       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0127 02:51:05.119989       1 server_others.go:185] Using iptables Proxier.
	I0127 02:51:05.120428       1 server.go:650] Version: v1.20.0
	I0127 02:51:05.121113       1 config.go:315] Starting service config controller
	I0127 02:51:05.121133       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0127 02:51:05.121151       1 config.go:224] Starting endpoint slice config controller
	I0127 02:51:05.121155       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0127 02:51:05.221288       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0127 02:51:05.221289       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [6868e5588e2518b2dbbee5fcdc288ff7639c3e6d27152b25c2d92a94144279e8] <==
	I0127 02:53:45.564830       1 serving.go:331] Generated self-signed cert in-memory
	W0127 02:53:51.972834       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 02:53:51.972887       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 02:53:51.972907       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 02:53:51.972915       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 02:53:52.301790       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0127 02:53:52.324171       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 02:53:52.324194       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 02:53:52.324220       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0127 02:53:52.524333       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [d422478362adb4ddd09d8598be12970c37fc3c672548faefb895f3b8924598a3] <==
	W0127 02:50:44.244930       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 02:50:44.300613       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0127 02:50:44.300956       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 02:50:44.301045       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 02:50:44.301133       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0127 02:50:44.312150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 02:50:44.312558       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 02:50:44.313106       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 02:50:44.313325       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 02:50:44.313549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 02:50:44.313999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 02:50:44.314167       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 02:50:44.314252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 02:50:44.314356       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 02:50:44.320781       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 02:50:44.320888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 02:50:44.320946       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 02:50:45.203043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 02:50:45.297607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 02:50:45.336807       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 02:50:45.366490       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 02:50:45.400896       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 02:50:45.408364       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 02:50:45.454556       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 02:50:45.901206       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 27 02:57:59 old-k8s-version-949994 kubelet[660]: E0127 02:57:59.502983     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:58:00 old-k8s-version-949994 kubelet[660]: E0127 02:58:00.501586     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:58:10 old-k8s-version-949994 kubelet[660]: I0127 02:58:10.500911     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:58:10 old-k8s-version-949994 kubelet[660]: E0127 02:58:10.501269     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:58:15 old-k8s-version-949994 kubelet[660]: E0127 02:58:15.501889     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:58:21 old-k8s-version-949994 kubelet[660]: I0127 02:58:21.501028     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:58:21 old-k8s-version-949994 kubelet[660]: E0127 02:58:21.501389     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:58:28 old-k8s-version-949994 kubelet[660]: E0127 02:58:28.501733     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:58:34 old-k8s-version-949994 kubelet[660]: I0127 02:58:34.500859     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:58:34 old-k8s-version-949994 kubelet[660]: E0127 02:58:34.501213     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:58:43 old-k8s-version-949994 kubelet[660]: E0127 02:58:43.501567     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: I0127 02:58:48.501580     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:58:48 old-k8s-version-949994 kubelet[660]: E0127 02:58:48.502462     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:58:55 old-k8s-version-949994 kubelet[660]: E0127 02:58:55.501821     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: I0127 02:59:00.502291     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:59:00 old-k8s-version-949994 kubelet[660]: E0127 02:59:00.502866     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:59:07 old-k8s-version-949994 kubelet[660]: E0127 02:59:07.502977     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: I0127 02:59:14.500969     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:59:14 old-k8s-version-949994 kubelet[660]: E0127 02:59:14.501372     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:59:18 old-k8s-version-949994 kubelet[660]: E0127 02:59:18.502650     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:59:26 old-k8s-version-949994 kubelet[660]: I0127 02:59:26.500953     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:59:26 old-k8s-version-949994 kubelet[660]: E0127 02:59:26.501368     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	Jan 27 02:59:29 old-k8s-version-949994 kubelet[660]: E0127 02:59:29.509681     660 pod_workers.go:191] Error syncing pod bf8b9c4e-eca6-40a8-98a0-2299a9ab115d ("metrics-server-9975d5f86-mftgr_kube-system(bf8b9c4e-eca6-40a8-98a0-2299a9ab115d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 02:59:38 old-k8s-version-949994 kubelet[660]: I0127 02:59:38.501676     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8e12f3bd92557f98a8be94f4260bf8b0f480431b2afe4106b52f61a99e950ff5
	Jan 27 02:59:38 old-k8s-version-949994 kubelet[660]: E0127 02:59:38.502293     660 pod_workers.go:191] Error syncing pod f3d4b693-4a2e-4c61-8522-2dc1b9aba634 ("dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wvx62_kubernetes-dashboard(f3d4b693-4a2e-4c61-8522-2dc1b9aba634)"
	
	
	==> kubernetes-dashboard [3e757cf47e5bf46aadd2d0209c944ee653ff40decba0a43ed51c38657b7ac9a8] <==
	2025/01/27 02:54:16 Starting overwatch
	2025/01/27 02:54:16 Using namespace: kubernetes-dashboard
	2025/01/27 02:54:16 Using in-cluster config to connect to apiserver
	2025/01/27 02:54:16 Using secret token for csrf signing
	2025/01/27 02:54:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/27 02:54:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/27 02:54:16 Successful initial request to the apiserver, version: v1.20.0
	2025/01/27 02:54:16 Generating JWE encryption key
	2025/01/27 02:54:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/27 02:54:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/27 02:54:16 Initializing JWE encryption key from synchronized object
	2025/01/27 02:54:16 Creating in-cluster Sidecar client
	2025/01/27 02:54:16 Serving insecurely on HTTP port: 9090
	2025/01/27 02:54:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:54:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:55:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:55:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:56:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:56:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:57:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:57:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:58:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:58:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 02:59:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5966fc744604d7bfd40d7701102f774e0d749dd772fdb69dbfbe01827d86cd21] <==
	I0127 02:53:55.482093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0127 02:54:25.492456       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca016e85c640cc59943efbf83715a2c4e1dcf8cc8020f4a480ba90214108c3ae] <==
	I0127 02:54:39.624293       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 02:54:39.660234       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 02:54:39.660496       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 02:54:57.129303       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 02:54:57.129606       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-949994_c5e97a0d-5bbe-4db5-9b53-ae9ec5a6ee23!
	I0127 02:54:57.130373       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"135dff40-1fb5-491e-8d9e-af5665f918e4", APIVersion:"v1", ResourceVersion:"860", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-949994_c5e97a0d-5bbe-4db5-9b53-ae9ec5a6ee23 became leader
	I0127 02:54:57.230498       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-949994_c5e97a0d-5bbe-4db5-9b53-ae9ec5a6ee23!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-949994 -n old-k8s-version-949994
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-949994 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-mftgr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-949994 describe pod metrics-server-9975d5f86-mftgr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-949994 describe pod metrics-server-9975d5f86-mftgr: exit status 1 (102.096624ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-mftgr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-949994 describe pod metrics-server-9975d5f86-mftgr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.22s)

                                                
                                    

Test pass (299/330)

Order    passed test    Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.78
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 8.92
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.1
18 TestDownloadOnly/v1.32.1/DeleteAll 0.22
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 271.91
29 TestAddons/serial/Volcano 39.02
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.9
35 TestAddons/parallel/Registry 15.27
36 TestAddons/parallel/Ingress 18.55
37 TestAddons/parallel/InspektorGadget 10.84
38 TestAddons/parallel/MetricsServer 6.82
40 TestAddons/parallel/CSI 48.81
41 TestAddons/parallel/Headlamp 16.58
42 TestAddons/parallel/CloudSpanner 5.85
43 TestAddons/parallel/LocalPath 52.14
44 TestAddons/parallel/NvidiaDevicePlugin 5.69
45 TestAddons/parallel/Yakd 11.84
47 TestAddons/StoppedEnableDisable 12.51
48 TestCertOptions 37.26
49 TestCertExpiration 221.79
51 TestForceSystemdFlag 34.9
52 TestForceSystemdEnv 43.66
53 TestDockerEnvContainerd 45.93
58 TestErrorSpam/setup 29.33
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 1.89
62 TestErrorSpam/unpause 1.81
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 48.86
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.34
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.29
75 TestFunctional/serial/CacheCmd/cache/add_local 1.32
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
83 TestFunctional/serial/ExtraConfig 47.15
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.76
86 TestFunctional/serial/LogsFileCmd 1.76
87 TestFunctional/serial/InvalidService 4.34
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 9.48
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.25
97 TestFunctional/parallel/ServiceCmdConnect 9.84
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 27
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 2.41
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 2.15
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
113 TestFunctional/parallel/License 0.36
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ServiceCmd/List 0.59
128 TestFunctional/parallel/ProfileCmd/profile_list 0.52
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
131 TestFunctional/parallel/MountCmd/any-port 8.73
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
133 TestFunctional/parallel/ServiceCmd/Format 0.4
134 TestFunctional/parallel/ServiceCmd/URL 0.53
135 TestFunctional/parallel/MountCmd/specific-port 2.47
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.51
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.35
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.95
144 TestFunctional/parallel/ImageCommands/Setup 0.75
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.68
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 117
162 TestMultiControlPlane/serial/DeployApp 35.71
163 TestMultiControlPlane/serial/PingHostFromPods 1.79
164 TestMultiControlPlane/serial/AddWorkerNode 22.76
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
167 TestMultiControlPlane/serial/CopyFile 19.25
168 TestMultiControlPlane/serial/StopSecondaryNode 12.85
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
170 TestMultiControlPlane/serial/RestartSecondaryNode 18.57
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 127.24
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.84
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
175 TestMultiControlPlane/serial/StopCluster 35.8
176 TestMultiControlPlane/serial/RestartCluster 63.14
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
178 TestMultiControlPlane/serial/AddSecondaryNode 45.22
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
183 TestJSONOutput/start/Command 51.9
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.77
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.7
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.82
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
208 TestKicCustomNetwork/create_custom_network 40.16
209 TestKicCustomNetwork/use_default_bridge_network 33.61
210 TestKicExistingNetwork 33.29
211 TestKicCustomSubnet 31.66
212 TestKicStaticIP 31.32
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 66.35
217 TestMountStart/serial/StartWithMountFirst 6.37
218 TestMountStart/serial/VerifyMountFirst 0.27
219 TestMountStart/serial/StartWithMountSecond 8.75
220 TestMountStart/serial/VerifyMountSecond 0.25
221 TestMountStart/serial/DeleteFirst 1.62
222 TestMountStart/serial/VerifyMountPostDelete 0.26
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.55
225 TestMountStart/serial/VerifyMountPostStop 0.26
228 TestMultiNode/serial/FreshStart2Nodes 66.89
229 TestMultiNode/serial/DeployApp2Nodes 20.62
230 TestMultiNode/serial/PingHostFrom2Pods 1.05
231 TestMultiNode/serial/AddNode 16.55
232 TestMultiNode/serial/MultiNodeLabels 0.11
233 TestMultiNode/serial/ProfileList 0.76
234 TestMultiNode/serial/CopyFile 10.57
235 TestMultiNode/serial/StopNode 2.29
236 TestMultiNode/serial/StartAfterStop 9.72
237 TestMultiNode/serial/RestartKeepsNodes 127.61
238 TestMultiNode/serial/DeleteNode 5.67
239 TestMultiNode/serial/StopMultiNode 23.87
240 TestMultiNode/serial/RestartMultiNode 53.02
241 TestMultiNode/serial/ValidateNameConflict 34.25
246 TestPreload 112.26
251 TestInsufficientStorage 11.71
252 TestRunningBinaryUpgrade 94.74
254 TestKubernetesUpgrade 107.24
255 TestMissingContainerUpgrade 179.55
257 TestPause/serial/Start 55.73
258 TestPause/serial/SecondStartNoReconfiguration 7.22
259 TestPause/serial/Pause 0.73
260 TestPause/serial/VerifyStatus 0.32
261 TestPause/serial/Unpause 0.65
262 TestPause/serial/PauseAgain 0.89
263 TestPause/serial/DeletePaused 2.51
264 TestPause/serial/VerifyDeletedResources 0.38
265 TestStoppedBinaryUpgrade/Setup 0.7
266 TestStoppedBinaryUpgrade/Upgrade 109.24
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
277 TestNoKubernetes/serial/StartWithK8s 37.51
285 TestNetworkPlugins/group/false 6.25
286 TestNoKubernetes/serial/StartWithStopK8s 18.89
290 TestNoKubernetes/serial/Start 9.11
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
292 TestNoKubernetes/serial/ProfileList 1.16
293 TestNoKubernetes/serial/Stop 1.24
294 TestNoKubernetes/serial/StartNoArgs 7.62
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
297 TestStartStop/group/old-k8s-version/serial/FirstStart 177.23
299 TestStartStop/group/no-preload/serial/FirstStart 66.86
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.65
301 TestStartStop/group/no-preload/serial/DeployApp 8.35
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.1
303 TestStartStop/group/old-k8s-version/serial/Stop 12.08
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
305 TestStartStop/group/no-preload/serial/Stop 12.01
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
309 TestStartStop/group/no-preload/serial/SecondStart 304.06
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
313 TestStartStop/group/no-preload/serial/Pause 3.14
315 TestStartStop/group/embed-certs/serial/FirstStart 87.36
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
319 TestStartStop/group/old-k8s-version/serial/Pause 3.12
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.21
322 TestStartStop/group/embed-certs/serial/DeployApp 8.49
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.58
324 TestStartStop/group/embed-certs/serial/Stop 12.16
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
326 TestStartStop/group/embed-certs/serial/SecondStart 266.1
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.34
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.17
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 298.8
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 3.07
337 TestStartStop/group/newest-cni/serial/FirstStart 32.86
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
340 TestStartStop/group/newest-cni/serial/Stop 1.26
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
342 TestStartStop/group/newest-cni/serial/SecondStart 16.97
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
346 TestStartStop/group/newest-cni/serial/Pause 3.26
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
348 TestNetworkPlugins/group/auto/Start 69.69
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.12
350 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
351 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.85
352 TestNetworkPlugins/group/kindnet/Start 87.07
353 TestNetworkPlugins/group/auto/KubeletFlags 0.3
354 TestNetworkPlugins/group/auto/NetCatPod 9.32
355 TestNetworkPlugins/group/auto/DNS 0.19
356 TestNetworkPlugins/group/auto/Localhost 0.16
357 TestNetworkPlugins/group/auto/HairPin 0.17
358 TestNetworkPlugins/group/calico/Start 84.88
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
361 TestNetworkPlugins/group/kindnet/NetCatPod 11.36
362 TestNetworkPlugins/group/kindnet/DNS 0.26
363 TestNetworkPlugins/group/kindnet/Localhost 0.26
364 TestNetworkPlugins/group/kindnet/HairPin 0.21
365 TestNetworkPlugins/group/custom-flannel/Start 51.97
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.3
368 TestNetworkPlugins/group/calico/NetCatPod 9.28
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
371 TestNetworkPlugins/group/calico/DNS 0.29
372 TestNetworkPlugins/group/calico/Localhost 0.22
373 TestNetworkPlugins/group/calico/HairPin 0.17
374 TestNetworkPlugins/group/custom-flannel/DNS 0.22
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
377 TestNetworkPlugins/group/enable-default-cni/Start 80.62
378 TestNetworkPlugins/group/flannel/Start 56.43
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
381 TestNetworkPlugins/group/flannel/NetCatPod 10.28
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
384 TestNetworkPlugins/group/flannel/DNS 0.26
385 TestNetworkPlugins/group/flannel/Localhost 0.19
386 TestNetworkPlugins/group/flannel/HairPin 0.19
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.33
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
390 TestNetworkPlugins/group/bridge/Start 38.55
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
392 TestNetworkPlugins/group/bridge/NetCatPod 8.28
393 TestNetworkPlugins/group/bridge/DNS 21.74
394 TestNetworkPlugins/group/bridge/Localhost 0.15
395 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.20.0/json-events (8.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-113434 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-113434 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.781573438s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.78s)
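Note: the -o=json flag in the command above makes minikube start emit machine-readable progress events instead of plain text. A minimal way to inspect that stream by hand (sketch only; the profile name is hypothetical and jq is assumed to be installed):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker | jq .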

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 02:07:23.206376 3586800 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 02:07:23.206467 3586800 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
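Note: this check only confirms the preload tarball is already on disk. A manual spot-check of the same artifact (path as logged above; adjust for a different MINIKUBE_HOME) is:

	ls -lh /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	# the digest should match the ?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 suffix
	# on the preload download URL recorded in the start log below
	md5sum /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4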

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-113434
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-113434: exit status 85 (95.089933ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-113434 | jenkins | v1.35.0 | 27 Jan 25 02:07 UTC |          |
	|         | -p download-only-113434        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 02:07:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:07:14.471332 3586805 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:07:14.471472 3586805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:07:14.471484 3586805 out.go:358] Setting ErrFile to fd 2...
	I0127 02:07:14.471490 3586805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:07:14.471848 3586805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	W0127 02:07:14.472010 3586805 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20316-3581420/.minikube/config/config.json: open /home/jenkins/minikube-integration/20316-3581420/.minikube/config/config.json: no such file or directory
	I0127 02:07:14.472478 3586805 out.go:352] Setting JSON to true
	I0127 02:07:14.474029 3586805 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":89378,"bootTime":1737854256,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:07:14.474134 3586805 start.go:139] virtualization:  
	I0127 02:07:14.478335 3586805 out.go:97] [download-only-113434] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0127 02:07:14.478513 3586805 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 02:07:14.478569 3586805 notify.go:220] Checking for updates...
	I0127 02:07:14.481480 3586805 out.go:169] MINIKUBE_LOCATION=20316
	I0127 02:07:14.484461 3586805 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:07:14.487303 3586805 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:07:14.490229 3586805 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:07:14.493196 3586805 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 02:07:14.498891 3586805 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 02:07:14.499153 3586805 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:07:14.520892 3586805 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:07:14.520998 3586805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:07:14.579366 3586805 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 02:07:14.570392822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:07:14.579478 3586805 docker.go:318] overlay module found
	I0127 02:07:14.582465 3586805 out.go:97] Using the docker driver based on user configuration
	I0127 02:07:14.582496 3586805 start.go:297] selected driver: docker
	I0127 02:07:14.582503 3586805 start.go:901] validating driver "docker" against <nil>
	I0127 02:07:14.582604 3586805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:07:14.634471 3586805 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 02:07:14.625303191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:07:14.634688 3586805 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 02:07:14.634978 3586805 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 02:07:14.635132 3586805 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 02:07:14.638230 3586805 out.go:169] Using Docker driver with root privileges
	I0127 02:07:14.641102 3586805 cni.go:84] Creating CNI manager for ""
	I0127 02:07:14.641160 3586805 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:07:14.641175 3586805 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 02:07:14.641261 3586805 start.go:340] cluster config:
	{Name:download-only-113434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-113434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:07:14.644271 3586805 out.go:97] Starting "download-only-113434" primary control-plane node in "download-only-113434" cluster
	I0127 02:07:14.644298 3586805 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 02:07:14.647072 3586805 out.go:97] Pulling base image v0.0.46 ...
	I0127 02:07:14.647098 3586805 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 02:07:14.647262 3586805 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 02:07:14.664265 3586805 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 02:07:14.665076 3586805 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 02:07:14.665192 3586805 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 02:07:14.712301 3586805 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 02:07:14.712324 3586805 cache.go:56] Caching tarball of preloaded images
	I0127 02:07:14.713123 3586805 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 02:07:14.716413 3586805 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 02:07:14.716450 3586805 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0127 02:07:14.801892 3586805 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 02:07:19.357110 3586805 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0127 02:07:19.357208 3586805 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-113434 host does not exist
	  To start a cluster, run: "minikube start -p download-only-113434"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
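Note: the non-zero exit does not fail this test. The profile was created with --download-only, so no host exists for logs to read, which is exactly what the stdout above reports. A quick way to reproduce the same condition (same binary and profile name as this run):

	out/minikube-linux-arm64 profile list
	out/minikube-linux-arm64 logs -p download-only-113434; echo "exit: $?"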

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-113434
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (8.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-908077 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-908077 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.919348196s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (8.92s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 02:07:32.578594 3586800 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 02:07:32.578633 3586800 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-908077
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-908077: exit status 85 (99.344585ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-113434 | jenkins | v1.35.0 | 27 Jan 25 02:07 UTC |                     |
	|         | -p download-only-113434        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 02:07 UTC | 27 Jan 25 02:07 UTC |
	| delete  | -p download-only-113434        | download-only-113434 | jenkins | v1.35.0 | 27 Jan 25 02:07 UTC | 27 Jan 25 02:07 UTC |
	| start   | -o=json --download-only        | download-only-908077 | jenkins | v1.35.0 | 27 Jan 25 02:07 UTC |                     |
	|         | -p download-only-908077        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 02:07:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 02:07:23.705144 3587007 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:07:23.705263 3587007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:07:23.705273 3587007 out.go:358] Setting ErrFile to fd 2...
	I0127 02:07:23.705279 3587007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:07:23.705538 3587007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:07:23.705935 3587007 out.go:352] Setting JSON to true
	I0127 02:07:23.706948 3587007 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":89388,"bootTime":1737854256,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:07:23.707025 3587007 start.go:139] virtualization:  
	I0127 02:07:23.710513 3587007 out.go:97] [download-only-908077] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 02:07:23.710821 3587007 notify.go:220] Checking for updates...
	I0127 02:07:23.713731 3587007 out.go:169] MINIKUBE_LOCATION=20316
	I0127 02:07:23.716643 3587007 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:07:23.719607 3587007 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:07:23.722494 3587007 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:07:23.725464 3587007 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 02:07:23.731081 3587007 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 02:07:23.731397 3587007 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:07:23.763706 3587007 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:07:23.763825 3587007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:07:23.818786 3587007 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 02:07:23.809336668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:07:23.818904 3587007 docker.go:318] overlay module found
	I0127 02:07:23.821991 3587007 out.go:97] Using the docker driver based on user configuration
	I0127 02:07:23.822017 3587007 start.go:297] selected driver: docker
	I0127 02:07:23.822024 3587007 start.go:901] validating driver "docker" against <nil>
	I0127 02:07:23.822163 3587007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:07:23.874544 3587007 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 02:07:23.864921087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:07:23.874769 3587007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 02:07:23.875072 3587007 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 02:07:23.875267 3587007 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 02:07:23.878434 3587007 out.go:169] Using Docker driver with root privileges
	I0127 02:07:23.881331 3587007 cni.go:84] Creating CNI manager for ""
	I0127 02:07:23.881407 3587007 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 02:07:23.881419 3587007 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 02:07:23.881508 3587007 start.go:340] cluster config:
	{Name:download-only-908077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-908077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:07:23.884581 3587007 out.go:97] Starting "download-only-908077" primary control-plane node in "download-only-908077" cluster
	I0127 02:07:23.884610 3587007 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 02:07:23.887505 3587007 out.go:97] Pulling base image v0.0.46 ...
	I0127 02:07:23.887535 3587007 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:07:23.887652 3587007 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 02:07:23.905941 3587007 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 02:07:23.906076 3587007 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 02:07:23.906135 3587007 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0127 02:07:23.906142 3587007 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0127 02:07:23.906151 3587007 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0127 02:07:23.944959 3587007 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 02:07:23.945000 3587007 cache.go:56] Caching tarball of preloaded images
	I0127 02:07:23.945888 3587007 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:07:23.948961 3587007 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 02:07:23.949001 3587007 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0127 02:07:24.055863 3587007 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:3dfa1a6dfbdb6fd11337c34d558e517e -> /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 02:07:27.249621 3587007 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0127 02:07:27.249762 3587007 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0127 02:07:28.183609 3587007 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 02:07:28.184020 3587007 profile.go:143] Saving config to /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/download-only-908077/config.json ...
	I0127 02:07:28.184059 3587007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/download-only-908077/config.json: {Name:mk4c1a30d43610d296a53a9cd873346eea316633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 02:07:28.184252 3587007 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 02:07:28.185057 3587007 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20316-3581420/.minikube/cache/linux/arm64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-908077 host does not exist
	  To start a cluster, run: "minikube start -p download-only-908077"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-908077
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 02:07:33.913068 3586800 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-686390 --alsologtostderr --binary-mirror http://127.0.0.1:38687 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-686390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-686390
--- PASS: TestBinaryMirror (0.62s)
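Note: the run above points --binary-mirror at a short-lived local HTTP endpoint on port 38687. A rough manual equivalent (sketch only; the mirror directory and profile name are hypothetical, and the mirror is assumed to follow the dl.k8s.io release path layout):

	python3 -m http.server 38687 --directory ./k8s-mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --alsologtostderr \
	  --binary-mirror http://127.0.0.1:38687 --driver=docker --container-runtime=containerd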

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-791589
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-791589: exit status 85 (76.149611ms)

                                                
                                                
-- stdout --
	* Profile "addons-791589" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-791589"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-791589
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-791589: exit status 85 (82.585138ms)

                                                
                                                
-- stdout --
	* Profile "addons-791589" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-791589"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (271.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-791589 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-791589 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m31.907697714s)
--- PASS: TestAddons/Setup (271.91s)
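Note: Setup enables every addon in a single start invocation via repeated --addons flags. The same addons can also be toggled individually on the running profile, which is how the later subtests disable them again; for example:

	out/minikube-linux-arm64 -p addons-791589 addons enable metrics-server
	out/minikube-linux-arm64 -p addons-791589 addons disable volcano --alsologtostderr -v=1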

                                                
                                    
x
+
TestAddons/serial/Volcano (39.02s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 62.038075ms
addons_test.go:823: volcano-controller stabilized in 62.454622ms
addons_test.go:815: volcano-admission stabilized in 62.781908ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-tjxt7" [52085aa5-e5fc-4bbd-9ea4-db6a1a844086] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003473034s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-zglx8" [18a225fc-746f-4b08-8e0e-6a5a85f888fb] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004173361s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-zz8jv" [d0c94c86-ce23-4563-9dd6-76827972bd7d] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003664652s
addons_test.go:842: (dbg) Run:  kubectl --context addons-791589 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-791589 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-791589 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ba760038-d19e-4e85-9c84-2500b08f1d81] Pending
helpers_test.go:344: "test-job-nginx-0" [ba760038-d19e-4e85-9c84-2500b08f1d81] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ba760038-d19e-4e85-9c84-2500b08f1d81] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004363889s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable volcano --alsologtostderr -v=1: (11.355535829s)
--- PASS: TestAddons/serial/Volcano (39.02s)
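Note: for a manual look at the state the waits above assert, the volcano components and the submitted vcjob can be listed directly (context, namespaces, and selectors as used by the test):

	kubectl --context addons-791589 get pods -n volcano-system -l app=volcano-scheduler
	kubectl --context addons-791589 get pods -n volcano-system -l app=volcano-admission
	kubectl --context addons-791589 get vcjob -n my-volcano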

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-791589 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-791589 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.9s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-791589 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-791589 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2eb333bb-5db4-4478-8409-10a8183a1560] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2eb333bb-5db4-4478-8409-10a8183a1560] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003768315s
addons_test.go:633: (dbg) Run:  kubectl --context addons-791589 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-791589 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-791589 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-791589 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.90s)
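Note: the two env probes above can be folded into one exec (same pod, context, and variable names the test uses); both values should print if the gcp-auth webhook injected them:

	kubectl --context addons-791589 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS && printenv GOOGLE_CLOUD_PROJECT"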

                                                
                                    
x
+
TestAddons/parallel/Registry (15.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.550689ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-zhtts" [3d14470f-2bcd-472f-a678-35771d87d2ae] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003255209s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b6k46" [f74e14d8-07b3-4649-8716-d675e1cf113b] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003635445s
addons_test.go:331: (dbg) Run:  kubectl --context addons-791589 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-791589 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-791589 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.249027638s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 ip
2025/01/27 02:13:19 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.27s)
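Note: besides the in-cluster DNS probe above, the registry is also reachable from the host through the node IP (the DEBUG GET line shows the test already fetched it on port 5000). A hedged host-side check, valid while the addon is still enabled, against the standard registry v2 API:

	# should return HTTP 200 (typically an empty JSON body) if the registry is serving
	curl -fsS "http://$(out/minikube-linux-arm64 -p addons-791589 ip):5000/v2/"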

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.55s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-791589 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-791589 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-791589 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [af90bdaf-1d34-4b5e-b2c4-ea53f6414b77] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [af90bdaf-1d34-4b5e-b2c4-ea53f6414b77] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004413063s
I0127 02:14:38.426216 3586800 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-791589 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable ingress-dns --alsologtostderr -v=1: (1.019738057s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable ingress --alsologtostderr -v=1: (7.840235356s)
--- PASS: TestAddons/parallel/Ingress (18.55s)
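The steps this test drove can be retraced by hand against a profile with the ingress and ingress-dns addons enabled. A minimal sketch, assuming a placeholder profile name PROFILE and the testdata manifests from the minikube source tree (PROFILE is not a profile from this run):

	kubectl --context PROFILE wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	kubectl --context PROFILE replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context PROFILE replace --force -f testdata/nginx-pod-svc.yaml
	# once the nginx pod is Running, exercise the ingress from inside the node
	out/minikube-linux-arm64 -p PROFILE ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# ingress-dns: resolve the example host against the address reported by `minikube ip`
	kubectl --context PROFILE replace --force -f testdata/ingress-dns-example-v1.yaml
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p PROFILE ip)"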

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6z9q5" [98607f64-c948-42a1-a8ca-804a1727b13c] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004409842s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable inspektor-gadget --alsologtostderr -v=1: (5.831968491s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.740055ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-6569m" [b51fabd9-8d37-4dd8-ba75-1ae918b12bcf] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004020072s
addons_test.go:402: (dbg) Run:  kubectl --context addons-791589 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)

                                                
                                    
x
+
TestAddons/parallel/CSI (48.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 02:13:44.706751 3586800 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 02:13:44.711914 3586800 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 02:13:44.711943 3586800 kapi.go:107] duration metric: took 9.106647ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.116354ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-791589 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-791589 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e051f2d0-a6a3-4433-923c-8819fb7efaa3] Pending
helpers_test.go:344: "task-pv-pod" [e051f2d0-a6a3-4433-923c-8819fb7efaa3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e051f2d0-a6a3-4433-923c-8819fb7efaa3] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006453446s
addons_test.go:511: (dbg) Run:  kubectl --context addons-791589 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-791589 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-791589 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-791589 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-791589 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-791589 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-791589 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9b356579-68b3-421d-9efe-24c24a58779a] Pending
helpers_test.go:344: "task-pv-pod-restore" [9b356579-68b3-421d-9efe-24c24a58779a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003782117s
addons_test.go:553: (dbg) Run:  kubectl --context addons-791589 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-791589 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-791589 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.838906056s)
--- PASS: TestAddons/parallel/CSI (48.81s)
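The csi-hostpath-driver flow above is effectively a create/snapshot/restore round trip. A minimal sketch, assuming a placeholder profile PROFILE and minikube's testdata/csi-hostpath-driver manifests:

	kubectl --context PROFILE create -f testdata/csi-hostpath-driver/pvc.yaml          # PVC "hpvc"
	kubectl --context PROFILE create -f testdata/csi-hostpath-driver/pv-pod.yaml       # pod "task-pv-pod" mounting it
	kubectl --context PROFILE create -f testdata/csi-hostpath-driver/snapshot.yaml     # VolumeSnapshot "new-snapshot-demo"
	kubectl --context PROFILE delete pod task-pv-pod
	kubectl --context PROFILE delete pvc hpvc
	kubectl --context PROFILE create -f testdata/csi-hostpath-driver/pvc-restore.yaml  # PVC restored from the snapshot
	kubectl --context PROFILE create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml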

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-791589 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-791589 --alsologtostderr -v=1: (1.593252493s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-tr8kk" [bda91115-665f-4ff1-99f3-4f8aaccb7fa2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-tr8kk" [bda91115-665f-4ff1-99f3-4f8aaccb7fa2] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003761268s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable headlamp --alsologtostderr -v=1: (5.976977221s)
--- PASS: TestAddons/parallel/Headlamp (16.58s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.85s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-9ctvd" [715c8b7a-dcc6-4124-aa0a-7fc7887bd6e7] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004061296s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.85s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-791589 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-791589 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-791589 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3e6af180-b621-46ec-9dd8-ac6725f72c31] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3e6af180-b621-46ec-9dd8-ac6725f72c31] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3e6af180-b621-46ec-9dd8-ac6725f72c31] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003723669s
addons_test.go:906: (dbg) Run:  kubectl --context addons-791589 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 ssh "cat /opt/local-path-provisioner/pvc-4f489d0a-7360-431e-8cc2-000cfb2bb9cc_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-791589 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-791589 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.664734832s)
--- PASS: TestAddons/parallel/LocalPath (52.14s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q7m8m" [b35e3571-89c0-4c0d-aabf-c6cc6a0c6583] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0047237s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.69s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-nz7fk" [99264375-e211-4901-a052-e3ba9dab473a] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003807269s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-791589 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-791589 addons disable yakd --alsologtostderr -v=1: (5.834799859s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.51s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-791589
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-791589: (12.176184216s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-791589
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-791589
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-791589
--- PASS: TestAddons/StoppedEnableDisable (12.51s)

                                                
                                    
x
+
TestCertOptions (37.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-703948 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-703948 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.181154385s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-703948 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-703948 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-703948 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-703948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-703948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-703948: (2.149227465s)
--- PASS: TestCertOptions (37.26s)

                                                
                                    
x
+
TestCertExpiration (221.79s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-393434 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-393434 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (33.128576855s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-393434 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-393434 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.363063721s)
helpers_test.go:175: Cleaning up "cert-expiration-393434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-393434
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-393434: (2.296891415s)
--- PASS: TestCertExpiration (221.79s)

                                                
                                    
x
+
TestForceSystemdFlag (34.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-458112 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-458112 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.499469424s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-458112 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-458112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-458112
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-458112: (2.048992768s)
--- PASS: TestForceSystemdFlag (34.90s)

                                                
                                    
x
+
TestForceSystemdEnv (43.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-022738 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-022738 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.615963774s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-022738 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-022738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-022738
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-022738: (2.562201672s)
--- PASS: TestForceSystemdEnv (43.66s)

                                                
                                    
x
+
TestDockerEnvContainerd (45.93s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-020212 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-020212 --driver=docker  --container-runtime=containerd: (30.3471431s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-020212"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iMgrVJDgANtt/agent.3609260" SSH_AGENT_PID="3609261" DOCKER_HOST=ssh://docker@127.0.0.1:37490 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iMgrVJDgANtt/agent.3609260" SSH_AGENT_PID="3609261" DOCKER_HOST=ssh://docker@127.0.0.1:37490 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iMgrVJDgANtt/agent.3609260" SSH_AGENT_PID="3609261" DOCKER_HOST=ssh://docker@127.0.0.1:37490 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.251014363s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-iMgrVJDgANtt/agent.3609260" SSH_AGENT_PID="3609261" DOCKER_HOST=ssh://docker@127.0.0.1:37490 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-020212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-020212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-020212: (1.957163601s)
--- PASS: TestDockerEnvContainerd (45.93s)
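The docker-env round trip exercised here boils down to pointing a local docker CLI at the containerd runtime inside the minikube node over SSH. A minimal sketch, assuming a placeholder profile PROFILE; the eval form is the usual interactive equivalent of the explicit SSH_AUTH_SOCK/DOCKER_HOST variables captured in the log:

	out/minikube-linux-arm64 start -p PROFILE --driver=docker --container-runtime=containerd
	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p PROFILE)"
	docker version                                               # now talks to the node over ssh://
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls                                              # the freshly built image is listed from the node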

                                                
                                    
x
+
TestErrorSpam/setup (29.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-364031 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-364031 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-364031 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-364031 --driver=docker  --container-runtime=containerd: (29.33165076s)
--- PASS: TestErrorSpam/setup (29.33s)

                                                
                                    
x
+
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
x
+
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
x
+
TestErrorSpam/pause (1.89s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 pause
--- PASS: TestErrorSpam/pause (1.89s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 stop: (1.286091435s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-364031 --log_dir /tmp/nospam-364031 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20316-3581420/.minikube/files/etc/test/nested/copy/3586800/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (48.86s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-368775 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0127 02:17:06.566174 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:06.572602 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:06.583956 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:06.605386 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:06.646745 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:06.728060 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:06.889551 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:07.211316 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:07.853390 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:09.134740 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:11.696610 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:17:16.817983 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-368775 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.855063675s)
--- PASS: TestFunctional/serial/StartWithProxy (48.86s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.34s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 02:17:22.643436 3586800 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-368775 --alsologtostderr -v=8
E0127 02:17:27.060112 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-368775 --alsologtostderr -v=8: (6.332931801s)
functional_test.go:663: soft start took 6.337531405s for "functional-368775" cluster.
I0127 02:17:28.976925 3586800 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (6.34s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-368775 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 cache add registry.k8s.io/pause:3.1: (1.594968745s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 cache add registry.k8s.io/pause:3.3: (1.409845086s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 cache add registry.k8s.io/pause:latest: (1.282592808s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-368775 /tmp/TestFunctionalserialCacheCmdcacheadd_local2003492741/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cache add minikube-local-cache-test:functional-368775
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cache delete minikube-local-cache-test:functional-368775
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-368775
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.564271ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 cache reload: (1.193102723s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)
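The cache_reload check amounts to removing a cached image from the node's runtime, confirming it is gone, then asking minikube to re-push everything in its cache. A minimal sketch, assuming a placeholder profile PROFILE:

	out/minikube-linux-arm64 -p PROFILE ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p PROFILE ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	out/minikube-linux-arm64 -p PROFILE cache reload
	out/minikube-linux-arm64 -p PROFILE ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again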

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 kubectl -- --context functional-368775 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-368775 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (47.15s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-368775 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 02:17:47.541463 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-368775 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.149363928s)
functional_test.go:761: restart took 47.149486255s for "functional-368775" cluster.
I0127 02:18:24.930881 3586800 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (47.15s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-368775 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 logs: (1.754832865s)
--- PASS: TestFunctional/serial/LogsCmd (1.76s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 logs --file /tmp/TestFunctionalserialLogsFileCmd1825655426/001/logs.txt
E0127 02:18:28.503388 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 logs --file /tmp/TestFunctionalserialLogsFileCmd1825655426/001/logs.txt: (1.762837724s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-368775 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-368775
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-368775: exit status 115 (626.45183ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30450 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-368775 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 config get cpus: exit status 14 (73.246094ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 config get cpus: exit status 14 (111.684898ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-368775 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-368775 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3624286: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.48s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-368775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-368775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (208.809734ms)

                                                
                                                
-- stdout --
	* [functional-368775] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:19:06.885354 3623978 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:19:06.885476 3623978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:19:06.885487 3623978 out.go:358] Setting ErrFile to fd 2...
	I0127 02:19:06.885492 3623978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:19:06.885735 3623978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:19:06.886083 3623978 out.go:352] Setting JSON to false
	I0127 02:19:06.887146 3623978 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":90091,"bootTime":1737854256,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:19:06.887214 3623978 start.go:139] virtualization:  
	I0127 02:19:06.890814 3623978 out.go:177] * [functional-368775] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 02:19:06.894156 3623978 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:19:06.894327 3623978 notify.go:220] Checking for updates...
	I0127 02:19:06.899983 3623978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:19:06.902749 3623978 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:19:06.905549 3623978 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:19:06.908995 3623978 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 02:19:06.911884 3623978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:19:06.915298 3623978 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:19:06.915828 3623978 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:19:06.949336 3623978 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:19:06.949489 3623978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:19:07.020543 3623978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 02:19:07.011490321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:19:07.020662 3623978 docker.go:318] overlay module found
	I0127 02:19:07.023592 3623978 out.go:177] * Using the docker driver based on existing profile
	I0127 02:19:07.026429 3623978 start.go:297] selected driver: docker
	I0127 02:19:07.026450 3623978 start.go:901] validating driver "docker" against &{Name:functional-368775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-368775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:19:07.026566 3623978 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:19:07.030237 3623978 out.go:201] 
	W0127 02:19:07.033135 3623978 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 02:19:07.035949 3623978 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-368775 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.49s)
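The non-zero exit above is the expected half of this test: the first dry run deliberately requests 250MB, below minikube's usable minimum of 1800MB, so start aborts with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before touching the existing profile, while the second dry run at functional_test.go:991 omits --memory and succeeds. A rough way to reproduce the failing half by hand, assuming a minikube binary on PATH and an existing functional-368775 profile (both names illustrative):

    minikube start -p functional-368775 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd
    echo $?    # 23 in this run, paired with the RSRC_INSUFFICIENT_REQ_MEMORY message above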

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-368775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-368775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (216.885827ms)

                                                
                                                
-- stdout --
	* [functional-368775] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:19:06.679254 3623934 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:19:06.679434 3623934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:19:06.679448 3623934 out.go:358] Setting ErrFile to fd 2...
	I0127 02:19:06.679454 3623934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:19:06.679919 3623934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:19:06.681034 3623934 out.go:352] Setting JSON to false
	I0127 02:19:06.682187 3623934 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":90091,"bootTime":1737854256,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:19:06.682281 3623934 start.go:139] virtualization:  
	I0127 02:19:06.685871 3623934 out.go:177] * [functional-368775] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0127 02:19:06.689685 3623934 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:19:06.689802 3623934 notify.go:220] Checking for updates...
	I0127 02:19:06.695325 3623934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:19:06.698287 3623934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:19:06.701081 3623934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:19:06.703963 3623934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 02:19:06.706751 3623934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:19:06.709946 3623934 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:19:06.710589 3623934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:19:06.738786 3623934 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:19:06.738913 3623934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:19:06.809469 3623934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 02:19:06.798981128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:19:06.809592 3623934 docker.go:318] overlay module found
	I0127 02:19:06.812725 3623934 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0127 02:19:06.815559 3623934 start.go:297] selected driver: docker
	I0127 02:19:06.815589 3623934 start.go:901] validating driver "docker" against &{Name:functional-368775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-368775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 02:19:06.815736 3623934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:19:06.819674 3623934 out.go:201] 
	W0127 02:19:06.822577 3623934 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 02:19:06.826528 3623934 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
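This subtest re-runs the same under-provisioned dry run and only checks that the client output is localized, so the French RSRC_INSUFFICIENT_REQ_MEMORY message is the passing outcome. A sketch of triggering the same localization by hand, assuming minikube picks the language from the usual locale environment variables (an assumption; the harness may configure this differently):

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-368775 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd
    # expected: the same exit status 23, with the message rendered as
    # "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."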

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)
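The three invocations cover the default text output, a custom Go template, and JSON. To post-process the JSON form, something like the following should work, assuming jq is installed and that the JSON keys match the template fields exercised above (Host, Kubelet, APIServer, Kubeconfig):

    minikube -p functional-368775 status -o json | jq -r '.Host, .Kubelet, .APIServer, .Kubeconfig'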

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-368775 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-368775 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-vqktl" [aa5e58c6-0117-4f7c-ab9f-dd26a642d59d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-vqktl" [aa5e58c6-0117-4f7c-ab9f-dd26a642d59d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.015437572s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31773
functional_test.go:1675: http://192.168.49.2:31773: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-8449669db6-vqktl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31773
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.84s)
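End to end, the subtest is: create a deployment, expose it as a NodePort service, resolve the node URL with minikube service --url, and verify the echoserver reply. The same flow by hand (names taken from the log; the curl step is an illustrative stand-in for the test's HTTP check):

    kubectl --context functional-368775 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-368775 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-368775 service hello-node-connect --url)
    curl -s "$URL"    # returns the Hostname / Request Information body captured above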

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [716a686b-14bb-46e7-9216-201dcb11e37c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003540544s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-368775 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-368775 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-368775 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-368775 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e077ae13-d2e0-439d-b033-1857c271fdb4] Pending
helpers_test.go:344: "sp-pod" [e077ae13-d2e0-439d-b033-1857c271fdb4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e077ae13-d2e0-439d-b033-1857c271fdb4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003092111s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-368775 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-368775 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-368775 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ec882aa-1d44-405c-85d1-cfe58d7b46ec] Pending
helpers_test.go:344: "sp-pod" [8ec882aa-1d44-405c-85d1-cfe58d7b46ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ec882aa-1d44-405c-85d1-cfe58d7b46ec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005103375s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-368775 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.00s)
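The subtest provisions a PVC, mounts it in sp-pod, touches /tmp/mount/foo, deletes and recreates the pod, and confirms the file is still there, i.e. the default storage class actually persists data across pod restarts. An illustrative stand-in for testdata/storage-provisioner/pvc.yaml (the real manifest may differ in size and access mode; only the claim name myclaim is taken from the log):

    kubectl --context functional-368775 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF
    kubectl --context functional-368775 get pvc myclaim    # should report Bound once the default storage class provisions it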

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh -n functional-368775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cp functional-368775:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1866593268/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh -n functional-368775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh -n functional-368775 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.41s)
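The cp checks run in both directions plus into a destination directory that does not yet exist on the node. Equivalent ad-hoc usage, with illustrative file names:

    minikube -p functional-368775 cp ./notes.txt /home/docker/notes.txt                         # host -> node
    minikube -p functional-368775 cp functional-368775:/home/docker/notes.txt ./notes-back.txt  # node -> host
    minikube -p functional-368775 ssh -n functional-368775 "sudo cat /home/docker/notes.txt"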

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/3586800/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo cat /etc/test/nested/copy/3586800/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
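FileSync verifies that files placed under the test's MINIKUBE_HOME are copied into the node at the same relative path. A hedged sketch of the same behaviour, assuming the default layout where minikube syncs everything under $MINIKUBE_HOME/files/ on start (the nested path mirrors the one checked above):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/3586800
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/3586800/hosts
    minikube -p functional-368775 start    # re-provision so the file is synced into the node
    minikube -p functional-368775 ssh "sudo cat /etc/test/nested/copy/3586800/hosts"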

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/3586800.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo cat /etc/ssl/certs/3586800.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/3586800.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo cat /usr/share/ca-certificates/3586800.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/35868002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo cat /etc/ssl/certs/35868002.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/35868002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo cat /usr/share/ca-certificates/35868002.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-368775 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh "sudo systemctl is-active docker": exit status 1 (404.044842ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh "sudo systemctl is-active crio": exit status 1 (274.503095ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
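The "Process exited with status 3" lines are just `systemctl is-active` reporting "inactive", which is the desired state here: with containerd as the configured runtime, both docker and crio must be disabled inside the node. The complementary positive check, assuming containerd is the systemd unit name for the active runtime:

    minikube -p functional-368775 ssh "sudo systemctl is-active containerd"    # expected: "active", exit 0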

                                                
                                    
x
+
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-368775 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-368775 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-368775 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-368775 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3621519: os: process already finished
helpers_test.go:502: unable to terminate pid 3621297: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-368775 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-368775 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a59bc8cf-e076-4906-8255-c1145d79b7b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a59bc8cf-e076-4906-8255-c1145d79b7b7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004025426s
I0127 02:18:44.219349 3586800 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-368775 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.138.68 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
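Taken together, the tunnel subtests show the expected workflow: keep `minikube tunnel` running, wait for the LoadBalancer service to be assigned an ingress IP, then reach it directly from the host. A hand-run equivalent using the same service and the IP reported above (the tunnel may prompt for sudo to install routes):

    minikube -p functional-368775 tunnel &    # must stay running for the lifetime of the tunnel
    kubectl --context functional-368775 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -sI http://10.98.138.68    # IP from the previous command in this run; nginx answers from the test pod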

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-368775 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-368775 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-368775 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-p5hrs" [075fcde1-857b-47e8-b44e-a52ff4976db9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-p5hrs" [075fcde1-857b-47e8-b44e-a52ff4976db9] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003416494s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "466.67172ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "55.519286ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "425.845924ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "84.778846ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 service list -o json
functional_test.go:1494: Took "626.57822ms" to run "out/minikube-linux-arm64 -p functional-368775 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdany-port3516373946/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737944343779954898" to /tmp/TestFunctionalparallelMountCmdany-port3516373946/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737944343779954898" to /tmp/TestFunctionalparallelMountCmdany-port3516373946/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737944343779954898" to /tmp/TestFunctionalparallelMountCmdany-port3516373946/001/test-1737944343779954898
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (429.412119ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 02:19:04.211590 3586800 retry.go:31] will retry after 493.088606ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 02:19 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 02:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 02:19 test-1737944343779954898
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh cat /mount-9p/test-1737944343779954898
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-368775 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0a7accf4-b06b-4a26-ad4e-d6dc2220d0d1] Pending
helpers_test.go:344: "busybox-mount" [0a7accf4-b06b-4a26-ad4e-d6dc2220d0d1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0a7accf4-b06b-4a26-ad4e-d6dc2220d0d1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0a7accf4-b06b-4a26-ad4e-d6dc2220d0d1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003785169s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-368775 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdany-port3516373946/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.73s)
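The mount flow is: run `minikube mount` as a long-lived process, confirm the 9p mount inside the node (the first findmnt races the mount setup, hence the single retry above), exercise it from a pod, then unmount and stop the daemon. Ad-hoc usage looks like this, with an illustrative host directory:

    minikube mount -p functional-368775 /tmp/host-dir:/mount-9p &    # must stay running for the mount's lifetime
    minikube -p functional-368775 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-368775 ssh "ls -la /mount-9p"
    minikube -p functional-368775 ssh "sudo umount -f /mount-9p"     # or simply kill the mount process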

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30160
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30160
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdspecific-port704712666/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (537.418843ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 02:19:13.047816 3586800 retry.go:31] will retry after 667.554405ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdspecific-port704712666/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh "sudo umount -f /mount-9p": exit status 1 (343.387676ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-368775 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdspecific-port704712666/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.47s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2044399712/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2044399712/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2044399712/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T" /mount1: exit status 1 (798.866345ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 02:19:15.782647 3586800 retry.go:31] will retry after 636.956504ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T" /mount1
2025/01/27 02:19:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-368775 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2044399712/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2044399712/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-368775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2044399712/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 version -o=json --components: (1.353481615s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-368775 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-368775
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-368775
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-368775 image ls --format short --alsologtostderr:
I0127 02:19:24.794972 3626901 out.go:345] Setting OutFile to fd 1 ...
I0127 02:19:24.795207 3626901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:24.795235 3626901 out.go:358] Setting ErrFile to fd 2...
I0127 02:19:24.795252 3626901 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:24.795521 3626901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
I0127 02:19:24.796239 3626901 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:24.796416 3626901 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:24.796924 3626901 cli_runner.go:164] Run: docker container inspect functional-368775 --format={{.State.Status}}
I0127 02:19:24.816289 3626901 ssh_runner.go:195] Run: systemctl --version
I0127 02:19:24.816340 3626901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368775
I0127 02:19:24.836826 3626901 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37500 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/functional-368775/id_rsa Username:docker}
I0127 02:19:24.935420 3626901 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
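The short format prints one image reference (repo:tag) per line, so a plain grep is enough to assert that a specific image is present. A minimal sketch against the listing above (the image name is taken from the stdout shown; this check is illustrative, not part of the test):

# hedged sketch: assert a specific image appears in the short listing
out/minikube-linux-arm64 -p functional-368775 image ls --format short \
  | grep -F 'registry.k8s.io/kube-apiserver:v1.32.1'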

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-368775 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | alpine             | sha256:f9d642 | 21.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:265c2d | 26.2MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:293376 | 24MB   |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:ddb38c | 18.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e124fb | 27.4MB |
| docker.io/kicbase/echo-server               | functional-368775  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-368775  | sha256:6f66a1 | 991B   |
| docker.io/library/nginx                     | latest             | sha256:781d90 | 68.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-368775 image ls --format table --alsologtostderr:
I0127 02:19:25.389221 3627054 out.go:345] Setting OutFile to fd 1 ...
I0127 02:19:25.389427 3627054 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:25.389449 3627054 out.go:358] Setting ErrFile to fd 2...
I0127 02:19:25.389470 3627054 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:25.389746 3627054 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
I0127 02:19:25.390524 3627054 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:25.390678 3627054 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:25.391195 3627054 cli_runner.go:164] Run: docker container inspect functional-368775 --format={{.State.Status}}
I0127 02:19:25.420477 3627054 ssh_runner.go:195] Run: systemctl --version
I0127 02:19:25.420556 3627054 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368775
I0127 02:19:25.444638 3627054 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37500 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/functional-368775/id_rsa Username:docker}
I0127 02:19:25.536186 3627054 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-368775 image ls --format json --alsologtostderr:
[{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-368775"],"size":"2173567"},{"id":"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"23968433"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:6f66a16c36835ddc041ce602f0b2929cf1f395df965e30fd8d9a589ffa1e4b36","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-368775"],"size":"991"},{"id":"sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDige
sts":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21565101"},{"id":"sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"68507108"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c
7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"},{"id":"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"26217748"},{"id":"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-
proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"27363416"},{"id":"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"18922457"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-368775 image ls --format json --alsologtostderr:
I0127 02:19:25.101987 3626969 out.go:345] Setting OutFile to fd 1 ...
I0127 02:19:25.102239 3626969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:25.102255 3626969 out.go:358] Setting ErrFile to fd 2...
I0127 02:19:25.102262 3626969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:25.102579 3626969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
I0127 02:19:25.103340 3626969 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:25.103511 3626969 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:25.104049 3626969 cli_runner.go:164] Run: docker container inspect functional-368775 --format={{.State.Status}}
I0127 02:19:25.128704 3626969 ssh_runner.go:195] Run: systemctl --version
I0127 02:19:25.128759 3626969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368775
I0127 02:19:25.154443 3626969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37500 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/functional-368775/id_rsa Username:docker}
I0127 02:19:25.250833 3626969 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
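The JSON format returns an array of objects with id, repoDigests, repoTags and size fields, as the stdout above shows. A minimal sketch for extracting just the tagged names (assumes jq is available; illustrative, not part of the test):

# hedged sketch: list only the tagged image names from the JSON output
out/minikube-linux-arm64 -p functional-368775 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]' \
  | sort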

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-368775 image ls --format yaml --alsologtostderr:
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "26217748"
- id: sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "27363416"
- id: sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "18922457"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-368775
size: "2173567"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "23968433"
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:6f66a16c36835ddc041ce602f0b2929cf1f395df965e30fd8d9a589ffa1e4b36
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-368775
size: "991"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "21565101"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "68507108"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-368775 image ls --format yaml --alsologtostderr:
I0127 02:19:24.797861 3626902 out.go:345] Setting OutFile to fd 1 ...
I0127 02:19:24.798031 3626902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:24.798043 3626902 out.go:358] Setting ErrFile to fd 2...
I0127 02:19:24.798049 3626902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:24.798330 3626902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
I0127 02:19:24.799015 3626902 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:24.799147 3626902 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:24.799626 3626902 cli_runner.go:164] Run: docker container inspect functional-368775 --format={{.State.Status}}
I0127 02:19:24.828159 3626902 ssh_runner.go:195] Run: systemctl --version
I0127 02:19:24.828222 3626902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368775
I0127 02:19:24.848823 3626902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37500 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/functional-368775/id_rsa Username:docker}
I0127 02:19:24.939773 3626902 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-368775 ssh pgrep buildkitd: exit status 1 (366.790582ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image build -t localhost/my-image:functional-368775 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 image build -t localhost/my-image:functional-368775 testdata/build --alsologtostderr: (3.340552891s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-368775 image build -t localhost/my-image:functional-368775 testdata/build --alsologtostderr:
I0127 02:19:25.436503 3627059 out.go:345] Setting OutFile to fd 1 ...
I0127 02:19:25.438074 3627059 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:25.438135 3627059 out.go:358] Setting ErrFile to fd 2...
I0127 02:19:25.438144 3627059 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 02:19:25.438476 3627059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
I0127 02:19:25.439295 3627059 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:25.441366 3627059 config.go:182] Loaded profile config "functional-368775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 02:19:25.442124 3627059 cli_runner.go:164] Run: docker container inspect functional-368775 --format={{.State.Status}}
I0127 02:19:25.468044 3627059 ssh_runner.go:195] Run: systemctl --version
I0127 02:19:25.468102 3627059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-368775
I0127 02:19:25.495252 3627059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37500 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/functional-368775/id_rsa Username:docker}
I0127 02:19:25.585843 3627059 build_images.go:161] Building image from path: /tmp/build.3076176931.tar
I0127 02:19:25.585922 3627059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 02:19:25.597942 3627059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3076176931.tar
I0127 02:19:25.601398 3627059 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3076176931.tar: stat -c "%s %y" /var/lib/minikube/build/build.3076176931.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3076176931.tar': No such file or directory
I0127 02:19:25.601428 3627059 ssh_runner.go:362] scp /tmp/build.3076176931.tar --> /var/lib/minikube/build/build.3076176931.tar (3072 bytes)
I0127 02:19:25.626024 3627059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3076176931
I0127 02:19:25.637455 3627059 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3076176931 -xf /var/lib/minikube/build/build.3076176931.tar
I0127 02:19:25.647071 3627059 containerd.go:394] Building image: /var/lib/minikube/build/build.3076176931
I0127 02:19:25.647145 3627059 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3076176931 --local dockerfile=/var/lib/minikube/build/build.3076176931 --output type=image,name=localhost/my-image:functional-368775
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e9c2fe11ef80bdfc4556cc91edafd5a9225f32fda360421294b864af6ebd65f1 0.0s done
#8 exporting config sha256:e2e00892cb1bdeac0ad18eb43b142d5bcaa6facb72e98418539df80a0679ae65 0.0s done
#8 naming to localhost/my-image:functional-368775 done
#8 DONE 0.2s
I0127 02:19:28.668033 3627059 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3076176931 --local dockerfile=/var/lib/minikube/build/build.3076176931 --output type=image,name=localhost/my-image:functional-368775: (3.020850072s)
I0127 02:19:28.668105 3627059 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3076176931
I0127 02:19:28.679307 3627059 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3076176931.tar
I0127 02:19:28.689621 3627059 build_images.go:217] Built localhost/my-image:functional-368775 from /tmp/build.3076176931.tar
I0127 02:19:28.689662 3627059 build_images.go:133] succeeded building to: functional-368775
I0127 02:19:28.689668 3627059 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)
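The buildkit steps above (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /, a 97-byte Dockerfile) imply a very small build context. A hedged reconstruction of an equivalent context and the same build invocation (the directory and file contents here are illustrative, not the actual testdata/build):

# hedged sketch: recreate a build context equivalent to what the build log implies
mkdir -p /tmp/minikube-build-demo && cd /tmp/minikube-build-demo
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo demo > content.txt
# same build invocation as the test, pointed at the reconstructed context
out/minikube-linux-arm64 -p functional-368775 image build -t localhost/my-image:functional-368775 . --alsologtostderr
out/minikube-linux-arm64 -p functional-368775 image ls | grep my-image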

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-368775
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image load --daemon kicbase/echo-server:functional-368775 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 image load --daemon kicbase/echo-server:functional-368775 --alsologtostderr: (1.212418439s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)
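image load --daemon pushes an image that already exists in the host's Docker daemon into the cluster's image store. A minimal sketch of the same round trip, reusing the tag created in ImageCommands/Setup (the trailing grep is an illustrative check, not part of the test):

# hedged sketch: push a host-daemon image into the cluster and confirm it is visible
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-368775
out/minikube-linux-arm64 -p functional-368775 image load --daemon kicbase/echo-server:functional-368775 --alsologtostderr
out/minikube-linux-arm64 -p functional-368775 image ls | grep echo-server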

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image load --daemon kicbase/echo-server:functional-368775 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 image load --daemon kicbase/echo-server:functional-368775 --alsologtostderr: (1.073703421s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)
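All three update-context subtests run the same CLI invocation (functional_test.go:2119); they differ in the kubeconfig state they start from. A minimal sketch of running it by hand and then confirming which context kubectl would use (the kubectl check is illustrative, not part of the test):

# hedged sketch: refresh the kubeconfig entry for the profile, then inspect the active context
out/minikube-linux-arm64 -p functional-368775 update-context --alsologtostderr -v=2
kubectl config current-context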

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-368775
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image load --daemon kicbase/echo-server:functional-368775 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-368775 image load --daemon kicbase/echo-server:functional-368775 --alsologtostderr: (1.161904588s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image save kicbase/echo-server:functional-368775 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image rm kicbase/echo-server:functional-368775 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-368775
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-368775 image save --daemon kicbase/echo-server:functional-368775 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-368775
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
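Taken together, the last four image subtests exercise a full save/remove/reload round trip. A condensed sketch of the same sequence, using a scratch path instead of the workspace path above (the commands match those in the logs; the /tmp path is illustrative):

# hedged sketch: save to a tarball, remove, reload from file, then export back to the host daemon
out/minikube-linux-arm64 -p functional-368775 image save kicbase/echo-server:functional-368775 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-arm64 -p functional-368775 image rm kicbase/echo-server:functional-368775 --alsologtostderr
out/minikube-linux-arm64 -p functional-368775 image load /tmp/echo-server-save.tar --alsologtostderr
docker rmi kicbase/echo-server:functional-368775
out/minikube-linux-arm64 -p functional-368775 image save --daemon kicbase/echo-server:functional-368775 --alsologtostderr
docker image inspect kicbase/echo-server:functional-368775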

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-368775
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-368775
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-368775
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (117s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-754161 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 02:19:50.426284 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-754161 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m56.142059129s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (117.00s)
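The --ha flag asks minikube to provision multiple control-plane nodes for the profile, which is why the status output later in this group shows three Control Plane entries (and, after the node add, one Worker). A minimal sketch of starting such a cluster and inspecting the node roles (flags match the command above; the kubectl check is illustrative):

# hedged sketch: start an HA (multi-control-plane) cluster and inspect node roles
out/minikube-linux-arm64 start -p ha-754161 --wait=true --memory=2200 --ha --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
kubectl --context ha-754161 get nodes -o wide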

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (35.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-754161 -- rollout status deployment/busybox: (32.522714201s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-6vfw9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-d2b9t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-rb5wt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-6vfw9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-d2b9t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-rb5wt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-6vfw9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-d2b9t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-rb5wt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (35.71s)
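The deploy step applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox deployment to roll out, and then checks in-cluster DNS from every pod. A condensed sketch of that verification loop (pod discovery and lookups mirror the commands in the log; the loop itself is illustrative):

# hedged sketch: run the same DNS lookups against every busybox pod
pods=$(out/minikube-linux-arm64 kubectl -p ha-754161 -- get pods -o jsonpath='{.items[*].metadata.name}')
for pod in $pods; do
  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec "$pod" -- nslookup kubernetes.io
  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done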

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-6vfw9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-6vfw9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-d2b9t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-d2b9t -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-rb5wt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-754161 -- exec busybox-58667487b6-rb5wt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (22.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-754161 -v=7 --alsologtostderr
E0127 02:22:06.565100 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-754161 -v=7 --alsologtostderr: (21.754212572s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr: (1.002479686s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.76s)
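node add joins another node to the existing profile; it comes up as a worker by default, which is the ha-754161-m04 entry seen in later status output. A minimal sketch (the kubectl check is illustrative, not part of the test):

# hedged sketch: add a worker node to the HA cluster and confirm it registers
out/minikube-linux-arm64 node add -p ha-754161 -v=7 --alsologtostderr
out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
kubectl --context ha-754161 get nodes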

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-754161 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.0431643s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp testdata/cp-test.txt ha-754161:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile655000387/001/cp-test_ha-754161.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161:/home/docker/cp-test.txt ha-754161-m02:/home/docker/cp-test_ha-754161_ha-754161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test_ha-754161_ha-754161-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161:/home/docker/cp-test.txt ha-754161-m03:/home/docker/cp-test_ha-754161_ha-754161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test.txt"
E0127 02:22:34.268463 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test_ha-754161_ha-754161-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161:/home/docker/cp-test.txt ha-754161-m04:/home/docker/cp-test_ha-754161_ha-754161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test_ha-754161_ha-754161-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp testdata/cp-test.txt ha-754161-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile655000387/001/cp-test_ha-754161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m02:/home/docker/cp-test.txt ha-754161:/home/docker/cp-test_ha-754161-m02_ha-754161.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test_ha-754161-m02_ha-754161.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m02:/home/docker/cp-test.txt ha-754161-m03:/home/docker/cp-test_ha-754161-m02_ha-754161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test_ha-754161-m02_ha-754161-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m02:/home/docker/cp-test.txt ha-754161-m04:/home/docker/cp-test_ha-754161-m02_ha-754161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test_ha-754161-m02_ha-754161-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp testdata/cp-test.txt ha-754161-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile655000387/001/cp-test_ha-754161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m03:/home/docker/cp-test.txt ha-754161:/home/docker/cp-test_ha-754161-m03_ha-754161.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test_ha-754161-m03_ha-754161.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m03:/home/docker/cp-test.txt ha-754161-m02:/home/docker/cp-test_ha-754161-m03_ha-754161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test_ha-754161-m03_ha-754161-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m03:/home/docker/cp-test.txt ha-754161-m04:/home/docker/cp-test_ha-754161-m03_ha-754161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test_ha-754161-m03_ha-754161-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp testdata/cp-test.txt ha-754161-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile655000387/001/cp-test_ha-754161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m04:/home/docker/cp-test.txt ha-754161:/home/docker/cp-test_ha-754161-m04_ha-754161.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161 "sudo cat /home/docker/cp-test_ha-754161-m04_ha-754161.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m04:/home/docker/cp-test.txt ha-754161-m02:/home/docker/cp-test_ha-754161-m04_ha-754161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test_ha-754161-m04_ha-754161-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 cp ha-754161-m04:/home/docker/cp-test.txt ha-754161-m03:/home/docker/cp-test_ha-754161-m04_ha-754161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m03 "sudo cat /home/docker/cp-test_ha-754161-m04_ha-754161-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.25s)
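Every copy above is verified the same way: cp places the file, then ssh -n cats it back on the target node. A minimal sketch of one host-to-node round trip (paths match those used in the log):

# hedged sketch: copy a file from the host into a node, then read it back over ssh
out/minikube-linux-arm64 -p ha-754161 cp testdata/cp-test.txt ha-754161-m02:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-754161 ssh -n ha-754161-m02 "sudo cat /home/docker/cp-test.txt"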

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-754161 node stop m02 -v=7 --alsologtostderr: (12.052284713s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr: exit status 7 (799.568798ms)

                                                
                                                
-- stdout --
	ha-754161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-754161-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-754161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-754161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:23:01.599579 3643486 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:23:01.599759 3643486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:23:01.599775 3643486 out.go:358] Setting ErrFile to fd 2...
	I0127 02:23:01.599783 3643486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:23:01.600081 3643486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:23:01.600306 3643486 out.go:352] Setting JSON to false
	I0127 02:23:01.600363 3643486 mustload.go:65] Loading cluster: ha-754161
	I0127 02:23:01.600462 3643486 notify.go:220] Checking for updates...
	I0127 02:23:01.600862 3643486 config.go:182] Loaded profile config "ha-754161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:23:01.600891 3643486 status.go:174] checking status of ha-754161 ...
	I0127 02:23:01.601549 3643486 cli_runner.go:164] Run: docker container inspect ha-754161 --format={{.State.Status}}
	I0127 02:23:01.638663 3643486 status.go:371] ha-754161 host status = "Running" (err=<nil>)
	I0127 02:23:01.638751 3643486 host.go:66] Checking if "ha-754161" exists ...
	I0127 02:23:01.639532 3643486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-754161
	I0127 02:23:01.668342 3643486 host.go:66] Checking if "ha-754161" exists ...
	I0127 02:23:01.668680 3643486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:23:01.668722 3643486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-754161
	I0127 02:23:01.693408 3643486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37505 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/ha-754161/id_rsa Username:docker}
	I0127 02:23:01.784772 3643486 ssh_runner.go:195] Run: systemctl --version
	I0127 02:23:01.790076 3643486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:23:01.804435 3643486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:23:01.873528 3643486 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-27 02:23:01.863182614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:23:01.874506 3643486 kubeconfig.go:125] found "ha-754161" server: "https://192.168.49.254:8443"
	I0127 02:23:01.874549 3643486 api_server.go:166] Checking apiserver status ...
	I0127 02:23:01.874604 3643486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:23:01.886735 3643486 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1498/cgroup
	I0127 02:23:01.897213 3643486 api_server.go:182] apiserver freezer: "9:freezer:/docker/a020c5fa34c1dad8ec4f42cb9030b9f7a9bb2635df9f65ae0d72ebcb27684de0/kubepods/burstable/pod19e093502a1b6bbd961734ecc83922e9/2436759c2fc2f50108d40b0c81b6f961b652d63c9d9cd2ae5f3fd1373b8ebe77"
	I0127 02:23:01.897289 3643486 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a020c5fa34c1dad8ec4f42cb9030b9f7a9bb2635df9f65ae0d72ebcb27684de0/kubepods/burstable/pod19e093502a1b6bbd961734ecc83922e9/2436759c2fc2f50108d40b0c81b6f961b652d63c9d9cd2ae5f3fd1373b8ebe77/freezer.state
	I0127 02:23:01.906511 3643486 api_server.go:204] freezer state: "THAWED"
	I0127 02:23:01.906554 3643486 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 02:23:01.914823 3643486 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 02:23:01.914857 3643486 status.go:463] ha-754161 apiserver status = Running (err=<nil>)
	I0127 02:23:01.914869 3643486 status.go:176] ha-754161 status: &{Name:ha-754161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:23:01.914886 3643486 status.go:174] checking status of ha-754161-m02 ...
	I0127 02:23:01.915203 3643486 cli_runner.go:164] Run: docker container inspect ha-754161-m02 --format={{.State.Status}}
	I0127 02:23:01.945614 3643486 status.go:371] ha-754161-m02 host status = "Stopped" (err=<nil>)
	I0127 02:23:01.945640 3643486 status.go:384] host is not running, skipping remaining checks
	I0127 02:23:01.945648 3643486 status.go:176] ha-754161-m02 status: &{Name:ha-754161-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:23:01.945670 3643486 status.go:174] checking status of ha-754161-m03 ...
	I0127 02:23:01.946014 3643486 cli_runner.go:164] Run: docker container inspect ha-754161-m03 --format={{.State.Status}}
	I0127 02:23:01.977664 3643486 status.go:371] ha-754161-m03 host status = "Running" (err=<nil>)
	I0127 02:23:01.977692 3643486 host.go:66] Checking if "ha-754161-m03" exists ...
	I0127 02:23:01.978018 3643486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-754161-m03
	I0127 02:23:01.999506 3643486 host.go:66] Checking if "ha-754161-m03" exists ...
	I0127 02:23:01.999854 3643486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:23:01.999925 3643486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-754161-m03
	I0127 02:23:02.023407 3643486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37515 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/ha-754161-m03/id_rsa Username:docker}
	I0127 02:23:02.112279 3643486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:23:02.125157 3643486 kubeconfig.go:125] found "ha-754161" server: "https://192.168.49.254:8443"
	I0127 02:23:02.125184 3643486 api_server.go:166] Checking apiserver status ...
	I0127 02:23:02.125230 3643486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:23:02.139698 3643486 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	I0127 02:23:02.149744 3643486 api_server.go:182] apiserver freezer: "9:freezer:/docker/bd0784b1ffe44745fb7c72bd14930750bf0ee53af506558e40a7812fdf9357b0/kubepods/burstable/poddefa05bce071346b0523487ea0f5d3fd/e8779f414a2649707d9c5da4dbd89a37e31eaf0d5e664292cc679c3743208e7a"
	I0127 02:23:02.149830 3643486 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bd0784b1ffe44745fb7c72bd14930750bf0ee53af506558e40a7812fdf9357b0/kubepods/burstable/poddefa05bce071346b0523487ea0f5d3fd/e8779f414a2649707d9c5da4dbd89a37e31eaf0d5e664292cc679c3743208e7a/freezer.state
	I0127 02:23:02.159045 3643486 api_server.go:204] freezer state: "THAWED"
	I0127 02:23:02.159088 3643486 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 02:23:02.167693 3643486 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 02:23:02.167736 3643486 status.go:463] ha-754161-m03 apiserver status = Running (err=<nil>)
	I0127 02:23:02.167747 3643486 status.go:176] ha-754161-m03 status: &{Name:ha-754161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:23:02.167765 3643486 status.go:174] checking status of ha-754161-m04 ...
	I0127 02:23:02.168096 3643486 cli_runner.go:164] Run: docker container inspect ha-754161-m04 --format={{.State.Status}}
	I0127 02:23:02.185973 3643486 status.go:371] ha-754161-m04 host status = "Running" (err=<nil>)
	I0127 02:23:02.186001 3643486 host.go:66] Checking if "ha-754161-m04" exists ...
	I0127 02:23:02.186372 3643486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-754161-m04
	I0127 02:23:02.205198 3643486 host.go:66] Checking if "ha-754161-m04" exists ...
	I0127 02:23:02.205664 3643486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:23:02.205735 3643486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-754161-m04
	I0127 02:23:02.223693 3643486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37520 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/ha-754161-m04/id_rsa Username:docker}
	I0127 02:23:02.323181 3643486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:23:02.334732 3643486 status.go:176] ha-754161-m04 status: &{Name:ha-754161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.85s)
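
Note: the stderr trace above shows how `minikube status` decides the apiserver state on each control-plane node: find the kube-apiserver PID, read its cgroup-v1 freezer state, then probe /healthz through the HA endpoint. A rough shell sketch of the same checks, run inside a node; the paths and the 192.168.49.254:8443 endpoint are taken from this log, and `curl -k` is only a stand-in for the HTTP client used by status.go:
	# newest kube-apiserver process, as in the logged pgrep call
	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# its freezer cgroup path, e.g. /docker/<id>/kubepods/burstable/<pod>/<container>
	cg=$(sudo grep -E '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
	sudo cat /sys/fs/cgroup/freezer$cg/freezer.state    # expect THAWED
	# health probe against the load-balanced apiserver endpoint
	curl -ks https://192.168.49.254:8443/healthz         # expect "ok"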

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (18.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-754161 node start m02 -v=7 --alsologtostderr: (17.481385435s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.57s)
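
Note: the StopSecondaryNode/RestartSecondaryNode pair reduces to a short CLI sequence; a minimal sketch, writing the test's out/minikube-linux-arm64 binary as plain `minikube` and using the profile name from this run:
	minikube -p ha-754161 node stop m02     # m02 reports host/kubelet/apiserver Stopped
	minikube -p ha-754161 status            # exits with status 7 while a node is down (as logged above)
	minikube -p ha-754161 node start m02    # rejoin the secondary control plane
	kubectl get nodes                       # the test re-checks the node list afterwards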

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.022941277s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-754161 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-754161 -v=7 --alsologtostderr
E0127 02:23:34.794310 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:34.800704 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:34.812038 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:34.833507 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:34.874950 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:34.956401 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:35.118002 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:35.439772 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:36.081437 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:37.363450 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:39.926251 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:45.051577 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:23:55.294389 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-754161 -v=7 --alsologtostderr: (37.442144656s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-754161 --wait=true -v=7 --alsologtostderr
E0127 02:24:15.776120 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:24:56.737820 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-754161 --wait=true -v=7 --alsologtostderr: (1m29.641820552s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-754161
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-754161 node delete m03 -v=7 --alsologtostderr: (9.90251903s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-754161 stop -v=7 --alsologtostderr: (35.680517314s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr: exit status 7 (121.795877ms)

                                                
                                                
-- stdout --
	ha-754161
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-754161-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-754161-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:26:17.307706 3658083 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:26:17.307853 3658083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:26:17.307865 3658083 out.go:358] Setting ErrFile to fd 2...
	I0127 02:26:17.307870 3658083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:26:17.308123 3658083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:26:17.308330 3658083 out.go:352] Setting JSON to false
	I0127 02:26:17.308371 3658083 mustload.go:65] Loading cluster: ha-754161
	I0127 02:26:17.308468 3658083 notify.go:220] Checking for updates...
	I0127 02:26:17.308810 3658083 config.go:182] Loaded profile config "ha-754161": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:26:17.308837 3658083 status.go:174] checking status of ha-754161 ...
	I0127 02:26:17.309377 3658083 cli_runner.go:164] Run: docker container inspect ha-754161 --format={{.State.Status}}
	I0127 02:26:17.329226 3658083 status.go:371] ha-754161 host status = "Stopped" (err=<nil>)
	I0127 02:26:17.329253 3658083 status.go:384] host is not running, skipping remaining checks
	I0127 02:26:17.329260 3658083 status.go:176] ha-754161 status: &{Name:ha-754161 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:26:17.329288 3658083 status.go:174] checking status of ha-754161-m02 ...
	I0127 02:26:17.329593 3658083 cli_runner.go:164] Run: docker container inspect ha-754161-m02 --format={{.State.Status}}
	I0127 02:26:17.355121 3658083 status.go:371] ha-754161-m02 host status = "Stopped" (err=<nil>)
	I0127 02:26:17.355150 3658083 status.go:384] host is not running, skipping remaining checks
	I0127 02:26:17.355156 3658083 status.go:176] ha-754161-m02 status: &{Name:ha-754161-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:26:17.355183 3658083 status.go:174] checking status of ha-754161-m04 ...
	I0127 02:26:17.355480 3658083 cli_runner.go:164] Run: docker container inspect ha-754161-m04 --format={{.State.Status}}
	I0127 02:26:17.373004 3658083 status.go:371] ha-754161-m04 host status = "Stopped" (err=<nil>)
	I0127 02:26:17.373031 3658083 status.go:384] host is not running, skipping remaining checks
	I0127 02:26:17.373037 3658083 status.go:176] ha-754161-m04 status: &{Name:ha-754161-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (63.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-754161 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 02:26:18.660192 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:27:06.564902 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-754161 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.035405339s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (63.14s)
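
Note: RestartCluster (like DeleteSecondaryNode above) verifies readiness with a kubectl go-template that prints each node's Ready condition; the template from the log, re-quoted so it can be pasted into a shell:
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# one True/False/Unknown line per node; the test presumably expects every line to be True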

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (45.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-754161 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-754161 --control-plane -v=7 --alsologtostderr: (44.220567949s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-754161 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.032052336s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
x
+
TestJSONOutput/start/Command (51.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-053483 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0127 02:28:34.788071 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:29:02.506756 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-053483 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.898915925s)
--- PASS: TestJSONOutput/start/Command (51.90s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-053483 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-053483 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.82s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-053483 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-053483 --output=json --user=testUser: (5.818082734s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-430412 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-430412 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.866909ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8a48a1c0-daba-4e48-adb6-32a1b7c99289","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-430412] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2945a20c-6205-4e6f-a188-e032e5bb5953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20316"}}
	{"specversion":"1.0","id":"488c65c7-c0b6-494c-ac42-56469ad2dd80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"948be05e-1c37-4775-9490-9275151c68e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig"}}
	{"specversion":"1.0","id":"2367258a-8f2f-4a77-b78c-a9b65f28697c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube"}}
	{"specversion":"1.0","id":"e5b9f316-995a-4138-bde4-df5e67fa0e04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b0ba8954-856b-4f90-a8af-92bf760c0ab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0bb01259-7a2a-4562-b71e-1bfb757c1b25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-430412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-430412
--- PASS: TestErrorJSONOutput (0.24s)
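
Note: every --output=json line above is a CloudEvents envelope, so it can be filtered with standard JSON tooling; a small sketch that reuses the flags from this test and assumes jq is installed:
	minikube start -p json-output-error-430412 --memory=2200 --output=json --wait=true --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'
	# -> DRV_UNSUPPORTED_OS (exit 56), matching the error event captured above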

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.16s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-340439 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-340439 --network=: (38.063094316s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-340439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-340439
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-340439: (2.077957535s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.16s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (33.61s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-926814 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-926814 --network=bridge: (31.575120337s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-926814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-926814
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-926814: (2.017700491s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.61s)

                                                
                                    
x
+
TestKicExistingNetwork (33.29s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0127 02:30:32.375519 3586800 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 02:30:32.391859 3586800 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 02:30:32.392577 3586800 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0127 02:30:32.393386 3586800 cli_runner.go:164] Run: docker network inspect existing-network
W0127 02:30:32.408680 3586800 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0127 02:30:32.408719 3586800 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0127 02:30:32.408734 3586800 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0127 02:30:32.408927 3586800 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 02:30:32.425467 3586800 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20c6b9faf740 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a5:84:e8:b3} reservation:<nil>}
I0127 02:30:32.427171 3586800 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001f8b030}
I0127 02:30:32.427829 3586800 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0127 02:30:32.428536 3586800 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0127 02:30:32.513667 3586800 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-693535 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-693535 --network=existing-network: (31.080533126s)
helpers_test.go:175: Cleaning up "existing-network-693535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-693535
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-693535: (2.037039087s)
I0127 02:31:05.648584 3586800 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.29s)
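
Note: the trace above shows minikube skipping the taken 192.168.49.0/24 subnet, picking 192.168.58.0/24, and creating the user-named network before the profile reuses it. The equivalent manual steps, with the create flags copied verbatim from the logged network_create.go call and `minikube` standing in for the test binary:
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	# then point a profile at the pre-existing network
	minikube start -p existing-network-693535 --network=existing-network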

                                                
                                    
x
+
TestKicCustomSubnet (31.66s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-979876 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-979876 --subnet=192.168.60.0/24: (29.563510118s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-979876 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-979876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-979876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-979876: (2.068668876s)
--- PASS: TestKicCustomSubnet (31.66s)

                                                
                                    
x
+
TestKicStaticIP (31.32s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-342547 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-342547 --static-ip=192.168.200.200: (29.094601425s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-342547 ip
E0127 02:32:06.565188 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:175: Cleaning up "static-ip-342547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-342547
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-342547: (2.070305012s)
--- PASS: TestKicStaticIP (31.32s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (66.35s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-344671 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-344671 --driver=docker  --container-runtime=containerd: (29.035452377s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-347329 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-347329 --driver=docker  --container-runtime=containerd: (31.574989077s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-344671
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-347329
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-347329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-347329
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-347329: (2.026314461s)
helpers_test.go:175: Cleaning up "first-344671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-344671
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-344671: (2.216453409s)
--- PASS: TestMinikubeProfile (66.35s)
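
Note: TestMinikubeProfile switches the active profile between two clusters; a minimal sketch of that flow with the profile names from this run (again writing the test binary as plain `minikube`):
	minikube start -p first-344671 --driver=docker --container-runtime=containerd
	minikube start -p second-347329 --driver=docker --container-runtime=containerd
	minikube profile first-344671      # make first-344671 the active profile
	minikube profile list -ojson       # the test then checks which entry is reported as active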

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-420142 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-420142 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.36586438s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.37s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-420142 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-422448 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-422448 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.753320611s)
E0127 02:33:29.630682 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountSecond (8.75s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-422448 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-420142 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-420142 --alsologtostderr -v=5: (1.621621026s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-422448 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-422448
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-422448: (1.203249763s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.55s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-422448
E0127 02:33:34.788160 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-422448: (6.553259139s)
--- PASS: TestMountStart/serial/RestartStopped (7.55s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-422448 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (66.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-136196 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-136196 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.32317921s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (20.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-136196 -- rollout status deployment/busybox: (18.61417768s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-5n8n4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-tvxg6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-5n8n4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-tvxg6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-5n8n4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-tvxg6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.62s)
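
Note: DeployApp2Nodes applies a busybox deployment from the repo's testdata (the manifest itself is not reproduced here; two replicas are observed in this run) and then checks in-cluster DNS from every pod. The verification half, done by hand against the same cluster, would look roughly like:
	kubectl --context multinode-136196 rollout status deployment/busybox
	pods=$(kubectl --context multinode-136196 get pods -o jsonpath='{.items[*].metadata.name}')
	for p in $pods; do
	  kubectl --context multinode-136196 exec "$p" -- nslookup kubernetes.default.svc.cluster.local
	done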

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-5n8n4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-5n8n4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-tvxg6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-136196 -- exec busybox-58667487b6-tvxg6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)
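The shell pipeline above depends on the busybox nslookup output layout: with this image the fifth line carries the "Address 1: <ip> <name>" entry for the queried name, so awk 'NR==5' selects it and cut -d' ' -f3 pulls out the IP, which is then pinged to prove the pod can reach the host. A hand-run equivalent (deployment and context names illustrative):

    HOST_IP=$(kubectl --context multinode-demo exec deploy/busybox -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-demo exec deploy/busybox -- ping -c 1 "$HOST_IP"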

                                                
                                    
x
+
TestMultiNode/serial/AddNode (16.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-136196 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-136196 -v 3 --alsologtostderr: (15.798428457s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.55s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-136196 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp testdata/cp-test.txt multinode-136196:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1481621991/001/cp-test_multinode-136196.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196:/home/docker/cp-test.txt multinode-136196-m02:/home/docker/cp-test_multinode-136196_multinode-136196-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m02 "sudo cat /home/docker/cp-test_multinode-136196_multinode-136196-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196:/home/docker/cp-test.txt multinode-136196-m03:/home/docker/cp-test_multinode-136196_multinode-136196-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m03 "sudo cat /home/docker/cp-test_multinode-136196_multinode-136196-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp testdata/cp-test.txt multinode-136196-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1481621991/001/cp-test_multinode-136196-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196-m02:/home/docker/cp-test.txt multinode-136196:/home/docker/cp-test_multinode-136196-m02_multinode-136196.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196 "sudo cat /home/docker/cp-test_multinode-136196-m02_multinode-136196.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196-m02:/home/docker/cp-test.txt multinode-136196-m03:/home/docker/cp-test_multinode-136196-m02_multinode-136196-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m03 "sudo cat /home/docker/cp-test_multinode-136196-m02_multinode-136196-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp testdata/cp-test.txt multinode-136196-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1481621991/001/cp-test_multinode-136196-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196-m03:/home/docker/cp-test.txt multinode-136196:/home/docker/cp-test_multinode-136196-m03_multinode-136196.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196 "sudo cat /home/docker/cp-test_multinode-136196-m03_multinode-136196.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 cp multinode-136196-m03:/home/docker/cp-test.txt multinode-136196-m02:/home/docker/cp-test_multinode-136196-m03_multinode-136196-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 ssh -n multinode-136196-m02 "sudo cat /home/docker/cp-test_multinode-136196-m03_multinode-136196-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.57s)
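The copy matrix above exercises minikube cp in all three directions. Condensed to its essentials (profile, node and file names illustrative):

    # host -> node, node -> host, node -> node, then verify on the target node.
    minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
      multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"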

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-136196 node stop m03: (1.242766085s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-136196 status: exit status 7 (509.741444ms)

                                                
                                                
-- stdout --
	multinode-136196
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136196-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136196-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr: exit status 7 (533.806756ms)

                                                
                                                
-- stdout --
	multinode-136196
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-136196-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-136196-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:35:41.748947 3712723 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:35:41.749065 3712723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:35:41.749076 3712723 out.go:358] Setting ErrFile to fd 2...
	I0127 02:35:41.749081 3712723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:35:41.749336 3712723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:35:41.749519 3712723 out.go:352] Setting JSON to false
	I0127 02:35:41.749563 3712723 mustload.go:65] Loading cluster: multinode-136196
	I0127 02:35:41.749669 3712723 notify.go:220] Checking for updates...
	I0127 02:35:41.749993 3712723 config.go:182] Loaded profile config "multinode-136196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:35:41.750019 3712723 status.go:174] checking status of multinode-136196 ...
	I0127 02:35:41.750650 3712723 cli_runner.go:164] Run: docker container inspect multinode-136196 --format={{.State.Status}}
	I0127 02:35:41.770784 3712723 status.go:371] multinode-136196 host status = "Running" (err=<nil>)
	I0127 02:35:41.770809 3712723 host.go:66] Checking if "multinode-136196" exists ...
	I0127 02:35:41.771116 3712723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-136196
	I0127 02:35:41.804142 3712723 host.go:66] Checking if "multinode-136196" exists ...
	I0127 02:35:41.804454 3712723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:35:41.804509 3712723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-136196
	I0127 02:35:41.822043 3712723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37625 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/multinode-136196/id_rsa Username:docker}
	I0127 02:35:41.912858 3712723 ssh_runner.go:195] Run: systemctl --version
	I0127 02:35:41.917458 3712723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:35:41.928967 3712723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:35:41.984364 3712723 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-27 02:35:41.97508461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:35:41.985023 3712723 kubeconfig.go:125] found "multinode-136196" server: "https://192.168.67.2:8443"
	I0127 02:35:41.985061 3712723 api_server.go:166] Checking apiserver status ...
	I0127 02:35:41.985113 3712723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 02:35:41.996770 3712723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1482/cgroup
	I0127 02:35:42.012884 3712723 api_server.go:182] apiserver freezer: "9:freezer:/docker/c3d5a7818e1fba94e1643aa74e118babda63bf99d75b8da9fdb5519d68e8a0e9/kubepods/burstable/pod34b9d51dc714ebe1477591cded3ddeee/133eae4c8c72235fdd784237e64f732e9ab3abf723e715e5759973460719be01"
	I0127 02:35:42.013028 3712723 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c3d5a7818e1fba94e1643aa74e118babda63bf99d75b8da9fdb5519d68e8a0e9/kubepods/burstable/pod34b9d51dc714ebe1477591cded3ddeee/133eae4c8c72235fdd784237e64f732e9ab3abf723e715e5759973460719be01/freezer.state
	I0127 02:35:42.023506 3712723 api_server.go:204] freezer state: "THAWED"
	I0127 02:35:42.023536 3712723 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0127 02:35:42.032876 3712723 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0127 02:35:42.032908 3712723 status.go:463] multinode-136196 apiserver status = Running (err=<nil>)
	I0127 02:35:42.032920 3712723 status.go:176] multinode-136196 status: &{Name:multinode-136196 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:35:42.032939 3712723 status.go:174] checking status of multinode-136196-m02 ...
	I0127 02:35:42.033277 3712723 cli_runner.go:164] Run: docker container inspect multinode-136196-m02 --format={{.State.Status}}
	I0127 02:35:42.051248 3712723 status.go:371] multinode-136196-m02 host status = "Running" (err=<nil>)
	I0127 02:35:42.051276 3712723 host.go:66] Checking if "multinode-136196-m02" exists ...
	I0127 02:35:42.051587 3712723 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-136196-m02
	I0127 02:35:42.072034 3712723 host.go:66] Checking if "multinode-136196-m02" exists ...
	I0127 02:35:42.072378 3712723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 02:35:42.072431 3712723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-136196-m02
	I0127 02:35:42.092939 3712723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37630 SSHKeyPath:/home/jenkins/minikube-integration/20316-3581420/.minikube/machines/multinode-136196-m02/id_rsa Username:docker}
	I0127 02:35:42.187976 3712723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 02:35:42.202444 3712723 status.go:176] multinode-136196-m02 status: &{Name:multinode-136196-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:35:42.202495 3712723 status.go:174] checking status of multinode-136196-m03 ...
	I0127 02:35:42.202902 3712723 cli_runner.go:164] Run: docker container inspect multinode-136196-m03 --format={{.State.Status}}
	I0127 02:35:42.222665 3712723 status.go:371] multinode-136196-m03 host status = "Stopped" (err=<nil>)
	I0127 02:35:42.222692 3712723 status.go:384] host is not running, skipping remaining checks
	I0127 02:35:42.222699 3712723 status.go:176] multinode-136196-m03 status: &{Name:multinode-136196-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
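The --alsologtostderr trace above shows how status decides the apiserver is healthy: find the kube-apiserver process, confirm its cgroup freezer state is THAWED, then query /healthz. Those checks can be repeated by hand roughly as follows; the PID, cgroup path and apiserver address vary per run (the address here is taken from this run's log).

    PROFILE=multinode-136196
    PID=$(minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    minikube -p "$PROFILE" ssh -- sudo grep -E '^[0-9]+:freezer:' /proc/"$PID"/cgroup
    # Reading freezer.state under the path printed above should show THAWED.
    curl -sk https://192.168.67.2:8443/healthz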

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-136196 node start m03 -v=7 --alsologtostderr: (8.854712199s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.72s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (127.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-136196
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-136196
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-136196: (24.852753027s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-136196 --wait=true -v=8 --alsologtostderr
E0127 02:37:06.565571 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-136196 --wait=true -v=8 --alsologtostderr: (1m42.619379727s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-136196
--- PASS: TestMultiNode/serial/RestartKeepsNodes (127.61s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-136196 node delete m03: (4.989944869s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-136196 stop: (23.674170512s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-136196 status: exit status 7 (101.735562ms)

                                                
                                                
-- stdout --
	multinode-136196
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136196-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr: exit status 7 (93.913398ms)

                                                
                                                
-- stdout --
	multinode-136196
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-136196-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 02:38:29.056492 3721249 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:38:29.056608 3721249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:38:29.056620 3721249 out.go:358] Setting ErrFile to fd 2...
	I0127 02:38:29.056625 3721249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:38:29.056874 3721249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:38:29.057051 3721249 out.go:352] Setting JSON to false
	I0127 02:38:29.057093 3721249 mustload.go:65] Loading cluster: multinode-136196
	I0127 02:38:29.057187 3721249 notify.go:220] Checking for updates...
	I0127 02:38:29.057514 3721249 config.go:182] Loaded profile config "multinode-136196": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:38:29.057538 3721249 status.go:174] checking status of multinode-136196 ...
	I0127 02:38:29.058085 3721249 cli_runner.go:164] Run: docker container inspect multinode-136196 --format={{.State.Status}}
	I0127 02:38:29.077042 3721249 status.go:371] multinode-136196 host status = "Stopped" (err=<nil>)
	I0127 02:38:29.077065 3721249 status.go:384] host is not running, skipping remaining checks
	I0127 02:38:29.077072 3721249 status.go:176] multinode-136196 status: &{Name:multinode-136196 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 02:38:29.077101 3721249 status.go:174] checking status of multinode-136196-m02 ...
	I0127 02:38:29.077423 3721249 cli_runner.go:164] Run: docker container inspect multinode-136196-m02 --format={{.State.Status}}
	I0127 02:38:29.101424 3721249 status.go:371] multinode-136196-m02 host status = "Stopped" (err=<nil>)
	I0127 02:38:29.101451 3721249 status.go:384] host is not running, skipping remaining checks
	I0127 02:38:29.101457 3721249 status.go:176] multinode-136196-m02 status: &{Name:multinode-136196-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (53.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-136196 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 02:38:34.787895 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-136196 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.343633475s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-136196 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.02s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (34.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-136196
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-136196-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-136196-m02 --driver=docker  --container-runtime=containerd: exit status 14 (105.585039ms)

                                                
                                                
-- stdout --
	* [multinode-136196-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-136196-m02' is duplicated with machine name 'multinode-136196-m02' in profile 'multinode-136196'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-136196-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-136196-m03 --driver=docker  --container-runtime=containerd: (31.742584639s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-136196
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-136196: exit status 80 (348.532892ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-136196 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-136196-m03 already exists in multinode-136196-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-136196-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-136196-m03: (1.973789366s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.25s)
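Two collision rules are covered here: a new profile may not reuse a machine name that already belongs to another profile's node (exit status 14, MK_USAGE), and node add refuses to create a node whose generated name is already taken by a standalone profile (exit status 80, GUEST_NODE_ADD). The first rule can be reproduced directly (names illustrative):

    minikube node list -p multinode-demo          # suppose this lists multinode-demo-m02
    minikube start -p multinode-demo-m02 --driver=docker --container-runtime=containerd \
      || echo "duplicate profile name rejected (exit $?)"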

                                                
                                    
x
+
TestPreload (112.26s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-965764 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-965764 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.838489708s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-965764 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-965764 image pull gcr.io/k8s-minikube/busybox: (2.006118631s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-965764
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-965764: (11.946827459s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-965764 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-965764 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.521777214s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-965764 image list
helpers_test.go:175: Cleaning up "test-preload-965764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-965764
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-965764: (2.606704524s)
--- PASS: TestPreload (112.26s)
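The preload test starts a cluster with --preload=false on an older Kubernetes, pulls an extra image, restarts, and verifies the image survives the restart. A condensed reproduction (profile name illustrative):

    P=preload-demo
    minikube start -p "$P" --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=containerd
    minikube -p "$P" image pull gcr.io/k8s-minikube/busybox
    minikube stop -p "$P"
    minikube start -p "$P" --driver=docker --container-runtime=containerd
    minikube -p "$P" image list | grep busybox    # image should still be present
    minikube delete -p "$P"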

                                                
                                    
x
+
TestInsufficientStorage (11.71s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-967263 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-967263 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.249199077s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d99a42e1-f295-4593-a21d-dec60e89771c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-967263] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd388173-9c13-4626-92b2-58e54aae4cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20316"}}
	{"specversion":"1.0","id":"d9f97a02-6c64-4c66-922f-2bd54b8de1c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"afd07c25-82bc-45cb-848d-b6706857e9e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig"}}
	{"specversion":"1.0","id":"4f6f1a5a-c10a-4c80-8390-836431142d5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube"}}
	{"specversion":"1.0","id":"ffae0fa1-32c7-4885-a9c3-1a4a5bf8b1f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d0410196-281a-4531-a6dc-de25cf6c052c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e13f9c8b-50a8-47a0-aa3b-fcd57d235a95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2f1808ad-a621-413a-a7fd-c899cdb0b124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"62c0a5f8-00ec-46e1-a85a-81dea758e73b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"690a4788-76a2-4b78-a515-70bc900f8fd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a2744a13-c654-4222-a03f-219427014a7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-967263\" primary control-plane node in \"insufficient-storage-967263\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e126e3c9-b5b2-45b5-ba62-dd9c5faff886","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad3d48e6-6601-4c67-bbe0-6f923c8c3e22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"83e5f1a0-c78f-4899-a258-b1ba1f7c4d99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-967263 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-967263 --output=json --layout=cluster: exit status 7 (270.827103ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-967263","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-967263","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 02:42:40.871656 3739737 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-967263" does not appear in /home/jenkins/minikube-integration/20316-3581420/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-967263 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-967263 --output=json --layout=cluster: exit status 7 (282.574258ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-967263","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-967263","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 02:42:41.155570 3739798 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-967263" does not appear in /home/jenkins/minikube-integration/20316-3581420/kubeconfig
	E0127 02:42:41.165570 3739798 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/insufficient-storage-967263/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-967263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-967263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-967263: (1.908793644s)
--- PASS: TestInsufficientStorage (11.71s)
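The JSON events above show the storage preflight being tripped deliberately: the run exports MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, which appear to make minikube treat /var as effectively full, so start exits with code 26 (RSRC_DOCKER_STORAGE) and status reports StatusCode 507; passing --force would skip the check. A rough sketch under those assumptions (profile name illustrative):

    export MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p storage-demo --memory=2048 --output=json --driver=docker \
      --container-runtime=containerd || echo "start failed as expected (exit $?)"
    minikube status -p storage-demo --output=json --layout=cluster
    minikube delete -p storage-demo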

                                                
                                    
x
+
TestRunningBinaryUpgrade (94.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1060155022 start -p running-upgrade-409880 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1060155022 start -p running-upgrade-409880 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.41862342s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-409880 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-409880 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.47031691s)
helpers_test.go:175: Cleaning up "running-upgrade-409880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-409880
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-409880: (3.068979134s)
--- PASS: TestRunningBinaryUpgrade (94.74s)

                                                
                                    
x
+
TestKubernetesUpgrade (107.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.627495672s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-873038
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-873038: (1.288361012s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-873038 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-873038 status --format={{.Host}}: exit status 7 (90.90958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.911050545s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-873038 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (133.445455ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-873038] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-873038
	    minikube start -p kubernetes-upgrade-873038 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8730382 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-873038 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-873038 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.443420559s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-873038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-873038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-873038: (2.60796947s)
--- PASS: TestKubernetesUpgrade (107.24s)
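The upgrade path (start on v1.20.0, stop, restart on v1.32.1) succeeds, while the attempted in-place downgrade is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and a suggestion to delete and recreate instead. In outline (profile name illustrative):

    P=k8s-upgrade-demo
    minikube start -p "$P" --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    minikube stop -p "$P"
    minikube start -p "$P" --kubernetes-version=v1.32.1 --driver=docker --container-runtime=containerd
    minikube start -p "$P" --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd \
      || echo "downgrade refused (exit $?)"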

                                                
                                    
x
+
TestMissingContainerUpgrade (179.55s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2010645572 start -p missing-upgrade-448818 --memory=2200 --driver=docker  --container-runtime=containerd
E0127 02:43:34.789977 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2010645572 start -p missing-upgrade-448818 --memory=2200 --driver=docker  --container-runtime=containerd: (1m40.104663718s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-448818
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-448818
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-448818 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-448818 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m15.337736738s)
helpers_test.go:175: Cleaning up "missing-upgrade-448818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-448818
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-448818: (2.732731456s)
--- PASS: TestMissingContainerUpgrade (179.55s)
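This scenario removes the cluster's docker container behind minikube's back after an older release created it, then checks that a current binary recreates it on start. Roughly (OLD_MINIKUBE is a placeholder for the archived binary, profile name illustrative):

    OLD_MINIKUBE=/path/to/minikube-v1.26.0        # placeholder path, not the temp file from this run
    P=missing-upgrade-demo
    "$OLD_MINIKUBE" start -p "$P" --memory=2200 --driver=docker --container-runtime=containerd
    docker stop "$P" && docker rm "$P"            # the container is named after the profile
    minikube start -p "$P" --memory=2200 --driver=docker --container-runtime=containerd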

                                                
                                    
x
+
TestPause/serial/Start (55.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-587295 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-587295 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (55.727048797s)
--- PASS: TestPause/serial/Start (55.73s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-587295 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-587295 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.204964065s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.22s)

                                                
                                    
x
+
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-587295 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-587295 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-587295 --output=json --layout=cluster: exit status 2 (322.906801ms)

                                                
                                                
-- stdout --
	{"Name":"pause-587295","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-587295","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-587295 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-587295 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.51s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-587295 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-587295 --alsologtostderr -v=5: (2.50673233s)
--- PASS: TestPause/serial/DeletePaused (2.51s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-587295
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-587295: exit status 1 (18.26398ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-587295: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
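Taken together, the pause group walks the full lifecycle: a paused cluster makes minikube status exit with code 2, unpause restores it, and delete removes the container, volume and network. Condensed (profile name illustrative):

    P=pause-demo
    minikube start -p "$P" --memory=2048 --install-addons=false --wait=all \
      --driver=docker --container-runtime=containerd
    minikube pause -p "$P"
    minikube status -p "$P" --output=json --layout=cluster || echo "paused (exit $?)"
    minikube unpause -p "$P"
    minikube delete -p "$P"
    docker volume inspect "$P" || true            # should report "no such volume"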

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (109.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3144188744 start -p stopped-upgrade-070324 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3144188744 start -p stopped-upgrade-070324 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (38.98649465s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3144188744 -p stopped-upgrade-070324 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3144188744 -p stopped-upgrade-070324 stop: (20.030717063s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-070324 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0127 02:47:06.573219 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-070324 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.224247373s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.24s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-070324
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-070324: (1.318404602s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401054 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-401054 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (117.19582ms)

-- stdout --
	* [NoKubernetes-401054] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (37.51s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401054 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401054 --driver=docker  --container-runtime=containerd: (36.994831814s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-401054 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.51s)

TestNetworkPlugins/group/false (6.25s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-155509 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-155509 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (311.79782ms)

-- stdout --
	* [false-155509] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20316
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0127 02:48:06.858603 3772253 out.go:345] Setting OutFile to fd 1 ...
	I0127 02:48:06.858772 3772253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:48:06.858822 3772253 out.go:358] Setting ErrFile to fd 2...
	I0127 02:48:06.858842 3772253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 02:48:06.859095 3772253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20316-3581420/.minikube/bin
	I0127 02:48:06.859509 3772253 out.go:352] Setting JSON to false
	I0127 02:48:06.860485 3772253 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":91831,"bootTime":1737854256,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0127 02:48:06.860580 3772253 start.go:139] virtualization:  
	I0127 02:48:06.869377 3772253 out.go:177] * [false-155509] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 02:48:06.872630 3772253 out.go:177]   - MINIKUBE_LOCATION=20316
	I0127 02:48:06.872770 3772253 notify.go:220] Checking for updates...
	I0127 02:48:06.878439 3772253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 02:48:06.881263 3772253 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20316-3581420/kubeconfig
	I0127 02:48:06.884108 3772253 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20316-3581420/.minikube
	I0127 02:48:06.886964 3772253 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 02:48:06.889799 3772253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 02:48:06.893129 3772253 config.go:182] Loaded profile config "NoKubernetes-401054": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 02:48:06.893242 3772253 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 02:48:06.935689 3772253 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 02:48:06.935787 3772253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 02:48:07.039880 3772253 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 02:48:07.029945302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 02:48:07.039989 3772253 docker.go:318] overlay module found
	I0127 02:48:07.043211 3772253 out.go:177] * Using the docker driver based on user configuration
	I0127 02:48:07.046058 3772253 start.go:297] selected driver: docker
	I0127 02:48:07.046081 3772253 start.go:901] validating driver "docker" against <nil>
	I0127 02:48:07.046116 3772253 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 02:48:07.049631 3772253 out.go:201] 
	W0127 02:48:07.052493 3772253 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 02:48:07.055285 3772253 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-155509 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-155509

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-155509

>>> host: /etc/nsswitch.conf:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /etc/hosts:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /etc/resolv.conf:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-155509

>>> host: crictl pods:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: crictl containers:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> k8s: describe netcat deployment:
error: context "false-155509" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-155509" does not exist

>>> k8s: netcat logs:
error: context "false-155509" does not exist

>>> k8s: describe coredns deployment:
error: context "false-155509" does not exist

>>> k8s: describe coredns pods:
error: context "false-155509" does not exist

>>> k8s: coredns logs:
error: context "false-155509" does not exist

>>> k8s: describe api server pod(s):
error: context "false-155509" does not exist

>>> k8s: api server logs:
error: context "false-155509" does not exist

>>> host: /etc/cni:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: ip a s:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: ip r s:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: iptables-save:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: iptables table nat:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> k8s: describe kube-proxy daemon set:
error: context "false-155509" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-155509" does not exist

>>> k8s: kube-proxy logs:
error: context "false-155509" does not exist

>>> host: kubelet daemon status:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: kubelet daemon config:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> k8s: kubelet logs:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 02:48:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-401054
contexts:
- context:
    cluster: NoKubernetes-401054
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 02:48:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-401054
  name: NoKubernetes-401054
current-context: NoKubernetes-401054
kind: Config
preferences: {}
users:
- name: NoKubernetes-401054
  user:
    client-certificate: /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/NoKubernetes-401054/client.crt
    client-key: /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/NoKubernetes-401054/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-155509

>>> host: docker daemon status:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: docker daemon config:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /etc/docker/daemon.json:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: docker system info:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: cri-docker daemon status:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: cri-docker daemon config:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: cri-dockerd version:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: containerd daemon status:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: containerd daemon config:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /etc/containerd/config.toml:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: containerd config dump:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: crio daemon status:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: crio daemon config:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: /etc/crio:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

>>> host: crio config:
* Profile "false-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-155509"

----------------------- debugLogs end: false-155509 [took: 5.676500068s] --------------------------------
helpers_test.go:175: Cleaning up "false-155509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-155509
--- PASS: TestNetworkPlugins/group/false (6.25s)

TestNoKubernetes/serial/StartWithStopK8s (18.89s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401054 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401054 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.188632815s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-401054 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-401054 status -o json: exit status 2 (382.717048ms)

-- stdout --
	{"Name":"NoKubernetes-401054","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-401054
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-401054: (2.31515114s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.89s)

TestNoKubernetes/serial/Start (9.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401054 --no-kubernetes --driver=docker  --container-runtime=containerd
E0127 02:48:34.790237 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401054 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.105096935s)
--- PASS: TestNoKubernetes/serial/Start (9.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-401054 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-401054 "sudo systemctl is-active --quiet service kubelet": exit status 1 (329.145691ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

TestNoKubernetes/serial/ProfileList (1.16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-401054
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-401054: (1.243430168s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.62s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-401054 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-401054 --driver=docker  --container-runtime=containerd: (7.624520242s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.62s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-401054 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-401054 "sudo systemctl is-active --quiet service kubelet": exit status 1 (426.163995ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

TestStartStop/group/old-k8s-version/serial/FirstStart (177.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-949994 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0127 02:50:09.633036 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-949994 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m57.228789755s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (177.23s)

TestStartStop/group/no-preload/serial/FirstStart (66.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-715478 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 02:52:06.565741 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-715478 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m6.860988428s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.86s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-949994 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [779bdee3-be7d-4a1b-b3c9-913b43002df5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [779bdee3-be7d-4a1b-b3c9-913b43002df5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003465387s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-949994 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-715478 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f56f3d0f-5f71-441a-b419-814b9051ce6d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f56f3d0f-5f71-441a-b419-814b9051ce6d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004083392s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-715478 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-949994 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-949994 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-949994 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-949994 --alsologtostderr -v=3: (12.0836983s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-715478 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-715478 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-715478 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-715478 --alsologtostderr -v=3: (12.010609881s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-949994 -n old-k8s-version-949994
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-949994 -n old-k8s-version-949994: exit status 7 (73.350305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-949994 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-715478 -n no-preload-715478
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-715478 -n no-preload-715478: exit status 7 (86.560997ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-715478 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (304.06s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-715478 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 02:53:34.787689 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:56:37.871374 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 02:57:06.564844 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-715478 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (5m3.675069443s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-715478 -n no-preload-715478
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (304.06s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bvfpm" [f74a5f48-3ddb-4705-867a-845ae9390fad] Running
E0127 02:58:34.787364 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003494365s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-bvfpm" [f74a5f48-3ddb-4705-867a-845ae9390fad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003616963s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-715478 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-715478 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.14s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-715478 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-715478 -n no-preload-715478
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-715478 -n no-preload-715478: exit status 2 (328.590333ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-715478 -n no-preload-715478
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-715478 -n no-preload-715478: exit status 2 (319.168186ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-715478 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-715478 -n no-preload-715478
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-715478 -n no-preload-715478
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.14s)

TestStartStop/group/embed-certs/serial/FirstStart (87.36s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-579827 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-579827 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m27.362363922s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.36s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9bnwh" [29677eeb-2716-494d-afda-49e5dcacffdf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004347312s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9bnwh" [29677eeb-2716-494d-afda-49e5dcacffdf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004665673s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-949994 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-949994 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-949994 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-949994 -n old-k8s-version-949994
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-949994 -n old-k8s-version-949994: exit status 2 (322.859273ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-949994 -n old-k8s-version-949994
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-949994 -n old-k8s-version-949994: exit status 2 (327.257683ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-949994 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-949994 -n old-k8s-version-949994
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-949994 -n old-k8s-version-949994
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-819881 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-819881 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (56.212540588s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-579827 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd585990-a7fb-4be0-8915-5cf7ecc6a5bd] Pending
helpers_test.go:344: "busybox" [fd585990-a7fb-4be0-8915-5cf7ecc6a5bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd585990-a7fb-4be0-8915-5cf7ecc6a5bd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004110236s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-579827 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-579827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-579827 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.424759525s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-579827 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-579827 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-579827 --alsologtostderr -v=3: (12.160915127s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-579827 -n embed-certs-579827
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-579827 -n embed-certs-579827: exit status 7 (79.580198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-579827 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-579827 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-579827 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m25.706867164s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-579827 -n embed-certs-579827
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-819881 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c1fd931f-b509-4d19-b9e4-844b4a49a41c] Pending
helpers_test.go:344: "busybox" [c1fd931f-b509-4d19-b9e4-844b4a49a41c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c1fd931f-b509-4d19-b9e4-844b4a49a41c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006401003s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-819881 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-819881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-819881 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.226295486s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-819881 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-819881 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-819881 --alsologtostderr -v=3: (12.171409677s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881: exit status 7 (81.074705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-819881 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-819881 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 03:02:06.564887 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:00.636497 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:00.642925 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:00.654272 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:00.675741 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:00.717302 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:00.798876 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:00.960255 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:01.282211 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:01.923937 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:03.205690 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:05.767337 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:06.359220 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:06.365843 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:06.377338 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:06.398803 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:06.440204 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:06.521964 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:06.683606 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:07.005557 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:07.647656 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:08.929019 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:10.888697 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:11.490722 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:16.612756 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:21.130720 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:26.855043 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:34.787554 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/functional-368775/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:41.612054 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:03:47.337439 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:04:22.573415 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:04:28.299558 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-819881 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m58.375892449s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (298.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tplll" [7e6a0efa-54b5-472f-bce7-e26bd678449d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003310724s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tplll" [7e6a0efa-54b5-472f-bce7-e26bd678449d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004246967s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-579827 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-579827 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-579827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-579827 -n embed-certs-579827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-579827 -n embed-certs-579827: exit status 2 (316.017013ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-579827 -n embed-certs-579827
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-579827 -n embed-certs-579827: exit status 2 (336.166643ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-579827 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-579827 -n embed-certs-579827
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-579827 -n embed-certs-579827
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (32.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-344851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 03:05:44.495443 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:05:50.221862 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-344851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (32.855466537s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (32.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-344851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-344851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092831998s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-344851 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-344851 --alsologtostderr -v=3: (1.263818525s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-344851 -n newest-cni-344851
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-344851 -n newest-cni-344851: exit status 7 (70.797779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-344851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-344851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-344851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (16.537003634s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-344851 -n newest-cni-344851
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-344851 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-344851 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-344851 -n newest-cni-344851
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-344851 -n newest-cni-344851: exit status 2 (392.065918ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-344851 -n newest-cni-344851
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-344851 -n newest-cni-344851: exit status 2 (329.188908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-344851 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-344851 -n newest-cni-344851
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-344851 -n newest-cni-344851
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-k5q75" [9f6dcc04-3118-4376-950e-b6475f03a8d8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004080855s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (69.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m9.691007617s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-k5q75" [9f6dcc04-3118-4376-950e-b6475f03a8d8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007593278s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-819881 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-819881 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-819881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881: exit status 2 (377.133173ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881: exit status 2 (395.789959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-819881 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-819881 -n default-k8s-diff-port-819881
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.85s)
E0127 03:12:06.565408 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:16.698857 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.285204 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.291536 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.303164 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.324484 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.365892 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.447248 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.608726 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:31.930431 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:32.572547 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:12:33.854012 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (87.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0127 03:06:49.634490 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:07:06.564667 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/addons-791589/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m27.069831805s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.07s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-155509 "pgrep -a kubelet"
I0127 03:07:30.996209 3586800 config.go:182] Loaded profile config "auto-155509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-155509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wkbnj" [ca66b702-7dfa-485e-a077-71d67611bd68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wkbnj" [ca66b702-7dfa-485e-a077-71d67611bd68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004444382s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-155509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m24.875872023s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ffb66" [696a2c4d-e994-4222-b086-931bb0f2e50d] Running
E0127 03:08:06.359556 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/no-preload-715478/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003732817s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-155509 "pgrep -a kubelet"
I0127 03:08:09.725539 3586800 config.go:182] Loaded profile config "kindnet-155509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-155509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mlhch" [ef061b9f-d418-4c16-92e7-71950f9369db] Pending
helpers_test.go:344: "netcat-5d86dc444-mlhch" [ef061b9f-d418-4c16-92e7-71950f9369db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-mlhch" [ef061b9f-d418-4c16-92e7-71950f9369db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00519548s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-155509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (51.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (51.968648854s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.97s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7qj6l" [b3353342-d6b2-4257-b1aa-72c1fdeb8a8b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005320774s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-155509 "pgrep -a kubelet"
I0127 03:09:33.162456 3586800 config.go:182] Loaded profile config "calico-155509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-155509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cc47q" [ad7c546d-19ae-4da0-8bff-7226e9afd714] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cc47q" [ad7c546d-19ae-4da0-8bff-7226e9afd714] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.01040435s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-155509 "pgrep -a kubelet"
I0127 03:09:39.642060 3586800 config.go:182] Loaded profile config "custom-flannel-155509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-155509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rhklp" [a04f2c68-f81e-4c0f-8015-d750a5fa8ab2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-rhklp" [a04f2c68-f81e-4c0f-8015-d750a5fa8ab2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005069458s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-155509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)
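
Note: the Localhost and HairPin probes are plain netcat port checks executed inside the netcat pod. Localhost verifies the pod can reach its own port 8080 via localhost; HairPin appears to verify the pod can reach itself back through the "netcat" service name (hairpin traffic). The commands, as run above, with the standard nc flag meanings:

  # -z: probe without sending data, -w 5: connection timeout, -i 5: delay between probes
  kubectl --context calico-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context calico-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"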

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-155509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (80.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m20.616823812s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.62s)

TestNetworkPlugins/group/flannel/Start (56.43s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0127 03:10:54.759734 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:54.766314 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:54.777866 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:54.799619 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:54.841217 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:54.922740 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:55.084313 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:55.405794 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:56.047850 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:57.329589 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:10:59.891835 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:11:05.013386 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.42956399s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.43s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jl6jt" [b3e2f960-41bd-4874-a55e-8fd6745d3fd5] Running
E0127 03:11:15.254724 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00416343s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
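
Note: the ControllerPod check is a label-selector wait for the flannel pod. A roughly equivalent manual check against this profile, assuming kubectl access to the same context (namespace and label taken from the log above):

  kubectl --context flannel-155509 -n kube-flannel get pods -l app=flannel
  kubectl --context flannel-155509 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m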

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-155509 "pgrep -a kubelet"
I0127 03:11:20.650337 3586800 config.go:182] Loaded profile config "flannel-155509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-155509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zv5pr" [0fbd6235-8244-4fa5-ae9b-e997688d21da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zv5pr" [0fbd6235-8244-4fa5-ae9b-e997688d21da] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005771637s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-155509 "pgrep -a kubelet"
I0127 03:11:29.162404 3586800 config.go:182] Loaded profile config "enable-default-cni-155509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-155509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bz92p" [c0466b95-b763-4192-baad-d5a0728ce2ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bz92p" [c0466b95-b763-4192-baad-d5a0728ce2ed] Running
E0127 03:11:35.736677 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/default-k8s-diff-port-819881/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005173551s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-155509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-155509 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (38.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-155509 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (38.550615032s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.55s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-155509 "pgrep -a kubelet"
I0127 03:12:34.637310 3586800 config.go:182] Loaded profile config "bridge-155509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (8.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-155509 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-snf64" [feca80a8-5205-448e-b681-3acd5a1589f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 03:12:36.416203 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-snf64" [feca80a8-5205-448e-b681-3acd5a1589f0] Running
E0127 03:12:41.537640 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004038754s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.28s)

TestNetworkPlugins/group/bridge/DNS (21.74s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-155509 exec deployment/netcat -- nslookup kubernetes.default
E0127 03:12:51.779661 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/auto-155509/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-155509 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.187607475s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
I0127 03:12:58.110353 3586800 retry.go:31] will retry after 1.392412016s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-155509 exec deployment/netcat -- nslookup kubernetes.default
E0127 03:13:00.636454 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/old-k8s-version-949994/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.284210 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.290594 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.302037 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.323500 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.365015 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.446482 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.607981 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:03.929678 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
E0127 03:13:04.571884 3586800 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/kindnet-155509/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context bridge-155509 exec deployment/netcat -- nslookup kubernetes.default: (5.153348384s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.74s)
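
Note: the first nslookup above timed out ("connection timed out; no servers could be reached") and the test only passed after a retry. If this recurs as a hard failure rather than a flake, re-running the same probe and checking CoreDNS is a reasonable first step (the k8s-app=kube-dns label is the usual CoreDNS label and is an assumption here, not something shown in this log):

  kubectl --context bridge-155509 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context bridge-155509 -n kube-system get pods -l k8s-app=kube-dns
  kubectl --context bridge-155509 -n kube-system logs -l k8s-app=kube-dns --tail=50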

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-155509 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-387861 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-387861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-387861
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-259545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-259545
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.72s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-155509 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-155509" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-155509

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-155509"

                                                
                                                
----------------------- debugLogs end: kubenet-155509 [took: 5.457483047s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-155509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-155509
--- SKIP: TestNetworkPlugins/group/kubenet (5.72s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-155509 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-155509" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20316-3581420/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 02:48:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-401054
contexts:
- context:
    cluster: NoKubernetes-401054
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 02:48:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-401054
  name: NoKubernetes-401054
current-context: NoKubernetes-401054
kind: Config
preferences: {}
users:
- name: NoKubernetes-401054
  user:
    client-certificate: /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/NoKubernetes-401054/client.crt
    client-key: /home/jenkins/minikube-integration/20316-3581420/.minikube/profiles/NoKubernetes-401054/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-155509

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-155509" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-155509"

                                                
                                                
----------------------- debugLogs end: cilium-155509 [took: 4.346279968s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-155509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-155509
--- SKIP: TestNetworkPlugins/group/cilium (4.51s)