Test Report: Docker_Linux_containerd_arm64 20318

dd22c410311484da6763aae43511cabe19037b94:2025-01-27:38092

Tests failed (2/330)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
|   248 | TestScheduledStopUnix                                   |        34.46 |
|   304 | TestStartStop/group/old-k8s-version/serial/SecondStart  |       377.68 |
|-------|---------------------------------------------------------|--------------|
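The failing tests can usually be re-run in isolation with Go's test filter. A minimal sketch, assuming a minikube source checkout with the integration tests under ./test/integration and the arm64 binary already built at out/minikube-linux-arm64; the -minikube-start-args value mirrors the start arguments shown in the log below, and exact flag names may differ between releases:

	# re-run only the scheduled-stop failure against the docker driver + containerd runtime
	go test -v -timeout 30m ./test/integration \
	  -run 'TestScheduledStopUnix' \
	  -args --minikube-start-args="--driver=docker --container-runtime=containerd"

The second failure can be targeted the same way by passing its full subtest path (TestStartStop/group/old-k8s-version/serial/SecondStart) to -run.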
TestScheduledStopUnix (34.46s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-748426 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-748426 --memory=2048 --driver=docker  --container-runtime=containerd: (29.439569625s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-748426 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-748426 -n scheduled-stop-748426
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-748426 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 1042156 running but should have been killed on reschedule of stop
panic.go:629: *** TestScheduledStopUnix FAILED at 2025-01-27 11:58:13.611502429 +0000 UTC m=+2107.021986795
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-748426
helpers_test.go:235: (dbg) docker inspect scheduled-stop-748426:

-- stdout --
	[
	    {
	        "Id": "ed0ae98c2aeafbc5b94192af7883039095182b47faeb933e7e419f00d97b51a4",
	        "Created": "2025-01-27T11:57:49.188239919Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1040227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T11:57:49.354081947Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/ed0ae98c2aeafbc5b94192af7883039095182b47faeb933e7e419f00d97b51a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed0ae98c2aeafbc5b94192af7883039095182b47faeb933e7e419f00d97b51a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed0ae98c2aeafbc5b94192af7883039095182b47faeb933e7e419f00d97b51a4/hosts",
	        "LogPath": "/var/lib/docker/containers/ed0ae98c2aeafbc5b94192af7883039095182b47faeb933e7e419f00d97b51a4/ed0ae98c2aeafbc5b94192af7883039095182b47faeb933e7e419f00d97b51a4-json.log",
	        "Name": "/scheduled-stop-748426",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-748426:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-748426",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b2a84234e5ee9219c2baaecfaba5525bdd39e3be01922f0abd14ac9eb4621710-init/diff:/var/lib/docker/overlay2/027cb12703497bfe682a04123361dc92cd40ae4c78d3ee9eafeedefee7ad1bd7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b2a84234e5ee9219c2baaecfaba5525bdd39e3be01922f0abd14ac9eb4621710/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b2a84234e5ee9219c2baaecfaba5525bdd39e3be01922f0abd14ac9eb4621710/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b2a84234e5ee9219c2baaecfaba5525bdd39e3be01922f0abd14ac9eb4621710/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-748426",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-748426/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-748426",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-748426",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-748426",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6efa4f5b5085c2936077b6b481cd705f32930ca048bf1377aebe24116698ed0a",
	            "SandboxKey": "/var/run/docker/netns/6efa4f5b5085",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33762"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33763"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33766"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33764"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33765"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-748426": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d8817b8dee70e7fa2c0909aa8a7c51b5b1873b5f56d022f3950bad798740718d",
	                    "EndpointID": "65671326e221da3c624326bc090705f5670934ff6f269996f9a6e3df4f759268",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-748426",
	                        "ed0ae98c2aea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-748426 -n scheduled-stop-748426
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-748426 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-748426 logs -n 25: (1.256737711s)
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-407627            | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:52 UTC |
	| start   | -p multinode-407627            | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:52 UTC | 27 Jan 25 11:53 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-407627       | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC |                     |
	| node    | multinode-407627 node delete   | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:53 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-407627 stop          | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:53 UTC | 27 Jan 25 11:54 UTC |
	| start   | -p multinode-407627            | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:54 UTC | 27 Jan 25 11:55 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| node    | list -p multinode-407627       | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:55 UTC |                     |
	| start   | -p multinode-407627-m02        | multinode-407627-m02  | jenkins | v1.35.0 | 27 Jan 25 11:55 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| start   | -p multinode-407627-m03        | multinode-407627-m03  | jenkins | v1.35.0 | 27 Jan 25 11:55 UTC | 27 Jan 25 11:55 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| node    | add -p multinode-407627        | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:55 UTC |                     |
	| delete  | -p multinode-407627-m03        | multinode-407627-m03  | jenkins | v1.35.0 | 27 Jan 25 11:55 UTC | 27 Jan 25 11:55 UTC |
	| delete  | -p multinode-407627            | multinode-407627      | jenkins | v1.35.0 | 27 Jan 25 11:55 UTC | 27 Jan 25 11:55 UTC |
	| start   | -p test-preload-010440         | test-preload-010440   | jenkins | v1.35.0 | 27 Jan 25 11:55 UTC | 27 Jan 25 11:57 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-010440 image pull | test-preload-010440   | jenkins | v1.35.0 | 27 Jan 25 11:57 UTC | 27 Jan 25 11:57 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-010440         | test-preload-010440   | jenkins | v1.35.0 | 27 Jan 25 11:57 UTC | 27 Jan 25 11:57 UTC |
	| start   | -p test-preload-010440         | test-preload-010440   | jenkins | v1.35.0 | 27 Jan 25 11:57 UTC | 27 Jan 25 11:57 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| image   | test-preload-010440 image list | test-preload-010440   | jenkins | v1.35.0 | 27 Jan 25 11:57 UTC | 27 Jan 25 11:57 UTC |
	| delete  | -p test-preload-010440         | test-preload-010440   | jenkins | v1.35.0 | 27 Jan 25 11:57 UTC | 27 Jan 25 11:57 UTC |
	| start   | -p scheduled-stop-748426       | scheduled-stop-748426 | jenkins | v1.35.0 | 27 Jan 25 11:57 UTC | 27 Jan 25 11:58 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=containerd |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-748426       | scheduled-stop-748426 | jenkins | v1.35.0 | 27 Jan 25 11:58 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-748426       | scheduled-stop-748426 | jenkins | v1.35.0 | 27 Jan 25 11:58 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-748426       | scheduled-stop-748426 | jenkins | v1.35.0 | 27 Jan 25 11:58 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-748426       | scheduled-stop-748426 | jenkins | v1.35.0 | 27 Jan 25 11:58 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-748426       | scheduled-stop-748426 | jenkins | v1.35.0 | 27 Jan 25 11:58 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-748426       | scheduled-stop-748426 | jenkins | v1.35.0 | 27 Jan 25 11:58 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:57:43
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:57:43.698034 1039736 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:57:43.698147 1039736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:57:43.698151 1039736 out.go:358] Setting ErrFile to fd 2...
	I0127 11:57:43.698155 1039736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:57:43.698496 1039736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:57:43.698934 1039736 out.go:352] Setting JSON to false
	I0127 11:57:43.700021 1039736 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":16809,"bootTime":1737962255,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 11:57:43.700085 1039736 start.go:139] virtualization:  
	I0127 11:57:43.703664 1039736 out.go:177] * [scheduled-stop-748426] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:57:43.707511 1039736 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:57:43.707637 1039736 notify.go:220] Checking for updates...
	I0127 11:57:43.713547 1039736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:57:43.716344 1039736 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 11:57:43.719130 1039736 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 11:57:43.721891 1039736 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:57:43.724584 1039736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:57:43.727552 1039736 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:57:43.766448 1039736 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:57:43.766562 1039736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:57:43.822345 1039736 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 11:57:43.812602593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:57:43.822456 1039736 docker.go:318] overlay module found
	I0127 11:57:43.825290 1039736 out.go:177] * Using the docker driver based on user configuration
	I0127 11:57:43.828081 1039736 start.go:297] selected driver: docker
	I0127 11:57:43.828089 1039736 start.go:901] validating driver "docker" against <nil>
	I0127 11:57:43.828101 1039736 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:57:43.828886 1039736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:57:43.879895 1039736 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-01-27 11:57:43.870748795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:57:43.880126 1039736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:57:43.880333 1039736 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:57:43.882976 1039736 out.go:177] * Using Docker driver with root privileges
	I0127 11:57:43.885494 1039736 cni.go:84] Creating CNI manager for ""
	I0127 11:57:43.885542 1039736 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 11:57:43.885550 1039736 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:57:43.885627 1039736 start.go:340] cluster config:
	{Name:scheduled-stop-748426 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-748426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:contain
erd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:57:43.890003 1039736 out.go:177] * Starting "scheduled-stop-748426" primary control-plane node in "scheduled-stop-748426" cluster
	I0127 11:57:43.892650 1039736 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 11:57:43.895363 1039736 out.go:177] * Pulling base image v0.0.46 ...
	I0127 11:57:43.897903 1039736 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:57:43.897944 1039736 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 11:57:43.897953 1039736 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 11:57:43.897961 1039736 cache.go:56] Caching tarball of preloaded images
	I0127 11:57:43.898065 1039736 preload.go:172] Found /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0127 11:57:43.898075 1039736 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0127 11:57:43.898409 1039736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/config.json ...
	I0127 11:57:43.898435 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/config.json: {Name:mk218504d197a75327b80a789dc665c3a883c3cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:43.916985 1039736 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 11:57:43.916996 1039736 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 11:57:43.917015 1039736 cache.go:227] Successfully downloaded all kic artifacts
	I0127 11:57:43.917068 1039736 start.go:360] acquireMachinesLock for scheduled-stop-748426: {Name:mk92c1565d00f8c50b0d670a17ff2446ac388089 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:57:43.917185 1039736 start.go:364] duration metric: took 103.037µs to acquireMachinesLock for "scheduled-stop-748426"
	I0127 11:57:43.917216 1039736 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-748426 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-748426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:57:43.917279 1039736 start.go:125] createHost starting for "" (driver="docker")
	I0127 11:57:43.920427 1039736 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0127 11:57:43.920666 1039736 start.go:159] libmachine.API.Create for "scheduled-stop-748426" (driver="docker")
	I0127 11:57:43.920702 1039736 client.go:168] LocalClient.Create starting
	I0127 11:57:43.920768 1039736 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem
	I0127 11:57:43.920799 1039736 main.go:141] libmachine: Decoding PEM data...
	I0127 11:57:43.920815 1039736 main.go:141] libmachine: Parsing certificate...
	I0127 11:57:43.920876 1039736 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem
	I0127 11:57:43.920897 1039736 main.go:141] libmachine: Decoding PEM data...
	I0127 11:57:43.920906 1039736 main.go:141] libmachine: Parsing certificate...
	I0127 11:57:43.921306 1039736 cli_runner.go:164] Run: docker network inspect scheduled-stop-748426 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 11:57:43.937065 1039736 cli_runner.go:211] docker network inspect scheduled-stop-748426 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 11:57:43.937148 1039736 network_create.go:284] running [docker network inspect scheduled-stop-748426] to gather additional debugging logs...
	I0127 11:57:43.937164 1039736 cli_runner.go:164] Run: docker network inspect scheduled-stop-748426
	W0127 11:57:43.953021 1039736 cli_runner.go:211] docker network inspect scheduled-stop-748426 returned with exit code 1
	I0127 11:57:43.953109 1039736 network_create.go:287] error running [docker network inspect scheduled-stop-748426]: docker network inspect scheduled-stop-748426: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-748426 not found
	I0127 11:57:43.953121 1039736 network_create.go:289] output of [docker network inspect scheduled-stop-748426]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-748426 not found
	
	** /stderr **
	I0127 11:57:43.953220 1039736 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:57:43.970041 1039736 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2217238752e2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:92:9d:42:1b} reservation:<nil>}
	I0127 11:57:43.970379 1039736 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2670da9d45c0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:22:55:9f:e0} reservation:<nil>}
	I0127 11:57:43.970693 1039736 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b00ab774f07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:4a:c8:c0:d6} reservation:<nil>}
	I0127 11:57:43.971108 1039736 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001949600}
	I0127 11:57:43.971124 1039736 network_create.go:124] attempt to create docker network scheduled-stop-748426 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 11:57:43.971178 1039736 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-748426 scheduled-stop-748426
	I0127 11:57:44.051212 1039736 network_create.go:108] docker network scheduled-stop-748426 192.168.76.0/24 created
	I0127 11:57:44.051236 1039736 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-748426" container
	I0127 11:57:44.051330 1039736 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 11:57:44.067426 1039736 cli_runner.go:164] Run: docker volume create scheduled-stop-748426 --label name.minikube.sigs.k8s.io=scheduled-stop-748426 --label created_by.minikube.sigs.k8s.io=true
	I0127 11:57:44.094303 1039736 oci.go:103] Successfully created a docker volume scheduled-stop-748426
	I0127 11:57:44.094413 1039736 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-748426-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-748426 --entrypoint /usr/bin/test -v scheduled-stop-748426:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0127 11:57:44.670543 1039736 oci.go:107] Successfully prepared a docker volume scheduled-stop-748426
	I0127 11:57:44.670583 1039736 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:57:44.670602 1039736 kic.go:194] Starting extracting preloaded images to volume ...
	I0127 11:57:44.670668 1039736 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-748426:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 11:57:49.115797 1039736 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-748426:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445096218s)
	I0127 11:57:49.115817 1039736 kic.go:203] duration metric: took 4.445212187s to extract preloaded images to volume ...
	W0127 11:57:49.115961 1039736 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0127 11:57:49.116056 1039736 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 11:57:49.173963 1039736 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-748426 --name scheduled-stop-748426 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-748426 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-748426 --network scheduled-stop-748426 --ip 192.168.76.2 --volume scheduled-stop-748426:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0127 11:57:49.533343 1039736 cli_runner.go:164] Run: docker container inspect scheduled-stop-748426 --format={{.State.Running}}
	I0127 11:57:49.555679 1039736 cli_runner.go:164] Run: docker container inspect scheduled-stop-748426 --format={{.State.Status}}
	I0127 11:57:49.580083 1039736 cli_runner.go:164] Run: docker exec scheduled-stop-748426 stat /var/lib/dpkg/alternatives/iptables
	I0127 11:57:49.629841 1039736 oci.go:144] the created container "scheduled-stop-748426" has a running status.
	I0127 11:57:49.629861 1039736 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa...
	I0127 11:57:50.740310 1039736 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 11:57:50.766533 1039736 cli_runner.go:164] Run: docker container inspect scheduled-stop-748426 --format={{.State.Status}}
	I0127 11:57:50.786066 1039736 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 11:57:50.786078 1039736 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-748426 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 11:57:50.825983 1039736 cli_runner.go:164] Run: docker container inspect scheduled-stop-748426 --format={{.State.Status}}
	I0127 11:57:50.843719 1039736 machine.go:93] provisionDockerMachine start ...
	I0127 11:57:50.843812 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:50.860742 1039736 main.go:141] libmachine: Using SSH client type: native
	I0127 11:57:50.861143 1039736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I0127 11:57:50.861151 1039736 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:57:50.986749 1039736 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-748426
	
	I0127 11:57:50.986766 1039736 ubuntu.go:169] provisioning hostname "scheduled-stop-748426"
	I0127 11:57:50.986842 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:51.008361 1039736 main.go:141] libmachine: Using SSH client type: native
	I0127 11:57:51.008626 1039736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I0127 11:57:51.008636 1039736 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-748426 && echo "scheduled-stop-748426" | sudo tee /etc/hostname
	I0127 11:57:51.145599 1039736 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-748426
	
	I0127 11:57:51.145683 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:51.164596 1039736 main.go:141] libmachine: Using SSH client type: native
	I0127 11:57:51.164834 1039736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I0127 11:57:51.164851 1039736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-748426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-748426/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-748426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:57:51.289004 1039736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:57:51.289022 1039736 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20318-888339/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-888339/.minikube}
	I0127 11:57:51.289067 1039736 ubuntu.go:177] setting up certificates
	I0127 11:57:51.289075 1039736 provision.go:84] configureAuth start
	I0127 11:57:51.289134 1039736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-748426
	I0127 11:57:51.306076 1039736 provision.go:143] copyHostCerts
	I0127 11:57:51.306133 1039736 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem, removing ...
	I0127 11:57:51.306140 1039736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem
	I0127 11:57:51.306220 1039736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem (1082 bytes)
	I0127 11:57:51.306337 1039736 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem, removing ...
	I0127 11:57:51.306341 1039736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem
	I0127 11:57:51.306370 1039736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem (1123 bytes)
	I0127 11:57:51.306433 1039736 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem, removing ...
	I0127 11:57:51.306436 1039736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem
	I0127 11:57:51.306458 1039736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem (1675 bytes)
	I0127 11:57:51.306501 1039736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-748426 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-748426]
	I0127 11:57:51.538554 1039736 provision.go:177] copyRemoteCerts
	I0127 11:57:51.538607 1039736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:57:51.538648 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:51.555438 1039736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa Username:docker}
	I0127 11:57:51.646064 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 11:57:51.672038 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:57:51.696333 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 11:57:51.720358 1039736 provision.go:87] duration metric: took 431.270556ms to configureAuth
	I0127 11:57:51.720375 1039736 ubuntu.go:193] setting minikube options for container-runtime
	I0127 11:57:51.720562 1039736 config.go:182] Loaded profile config "scheduled-stop-748426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:57:51.720568 1039736 machine.go:96] duration metric: took 876.839617ms to provisionDockerMachine
	I0127 11:57:51.720573 1039736 client.go:171] duration metric: took 7.799866955s to LocalClient.Create
	I0127 11:57:51.720585 1039736 start.go:167] duration metric: took 7.799921436s to libmachine.API.Create "scheduled-stop-748426"
	I0127 11:57:51.720592 1039736 start.go:293] postStartSetup for "scheduled-stop-748426" (driver="docker")
	I0127 11:57:51.720600 1039736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:57:51.720647 1039736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:57:51.720683 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:51.737148 1039736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa Username:docker}
	I0127 11:57:51.826025 1039736 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:57:51.828869 1039736 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 11:57:51.828900 1039736 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 11:57:51.828909 1039736 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 11:57:51.828915 1039736 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 11:57:51.828925 1039736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-888339/.minikube/addons for local assets ...
	I0127 11:57:51.828984 1039736 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-888339/.minikube/files for local assets ...
	I0127 11:57:51.829093 1039736 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem -> 8937152.pem in /etc/ssl/certs
	I0127 11:57:51.829197 1039736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:57:51.837739 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem --> /etc/ssl/certs/8937152.pem (1708 bytes)
	I0127 11:57:51.862031 1039736 start.go:296] duration metric: took 141.42475ms for postStartSetup
	I0127 11:57:51.862401 1039736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-748426
	I0127 11:57:51.879043 1039736 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/config.json ...
	I0127 11:57:51.879322 1039736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:57:51.879363 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:51.895400 1039736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa Username:docker}
	I0127 11:57:51.981563 1039736 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 11:57:51.985665 1039736 start.go:128] duration metric: took 8.068369084s to createHost
	I0127 11:57:51.985680 1039736 start.go:83] releasing machines lock for "scheduled-stop-748426", held for 8.068487064s
	I0127 11:57:51.985746 1039736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-748426
	I0127 11:57:52.003269 1039736 ssh_runner.go:195] Run: cat /version.json
	I0127 11:57:52.003321 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:52.003579 1039736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 11:57:52.003647 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:57:52.034805 1039736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa Username:docker}
	I0127 11:57:52.041125 1039736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa Username:docker}
	I0127 11:57:52.120555 1039736 ssh_runner.go:195] Run: systemctl --version
	I0127 11:57:52.249742 1039736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:57:52.253950 1039736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 11:57:52.278869 1039736 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 11:57:52.278938 1039736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:57:52.309082 1039736 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 11:57:52.309097 1039736 start.go:495] detecting cgroup driver to use...
	I0127 11:57:52.309128 1039736 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 11:57:52.309179 1039736 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:57:52.321894 1039736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:57:52.333629 1039736 docker.go:217] disabling cri-docker service (if available) ...
	I0127 11:57:52.333685 1039736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 11:57:52.348093 1039736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 11:57:52.363817 1039736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 11:57:52.455143 1039736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 11:57:52.557989 1039736 docker.go:233] disabling docker service ...
	I0127 11:57:52.558046 1039736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 11:57:52.579455 1039736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 11:57:52.591139 1039736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 11:57:52.674404 1039736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 11:57:52.766168 1039736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 11:57:52.777539 1039736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:57:52.794222 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 11:57:52.803979 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:57:52.814099 1039736 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:57:52.814161 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:57:52.823667 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:57:52.833080 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:57:52.842587 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:57:52.851948 1039736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:57:52.860845 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:57:52.870713 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:57:52.879838 1039736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:57:52.889189 1039736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:57:52.897638 1039736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:57:52.905776 1039736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:57:52.997475 1039736 ssh_runner.go:195] Run: sudo systemctl restart containerd
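(The sed edits above switch containerd to the "cgroupfs" driver before this restart. A minimal way to confirm the effect, assuming shell access to the node, e.g. via minikube ssh -p scheduled-stop-748426:)
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false (cgroupfs)
    systemctl is-active containerd                        # should report "active" after the restart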
	I0127 11:57:53.132368 1039736 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 11:57:53.132430 1039736 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 11:57:53.136130 1039736 start.go:563] Will wait 60s for crictl version
	I0127 11:57:53.136183 1039736 ssh_runner.go:195] Run: which crictl
	I0127 11:57:53.139400 1039736 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:57:53.174276 1039736 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0127 11:57:53.174338 1039736 ssh_runner.go:195] Run: containerd --version
	I0127 11:57:53.198966 1039736 ssh_runner.go:195] Run: containerd --version
	I0127 11:57:53.229578 1039736 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
	I0127 11:57:53.232260 1039736 cli_runner.go:164] Run: docker network inspect scheduled-stop-748426 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 11:57:53.248172 1039736 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 11:57:53.251873 1039736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:57:53.262095 1039736 kubeadm.go:883] updating cluster {Name:scheduled-stop-748426 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-748426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:57:53.262210 1039736 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:57:53.262268 1039736 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:57:53.296767 1039736 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 11:57:53.296779 1039736 containerd.go:534] Images already preloaded, skipping extraction
	I0127 11:57:53.296840 1039736 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 11:57:53.330834 1039736 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 11:57:53.330847 1039736 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:57:53.330854 1039736 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.1 containerd true true} ...
	I0127 11:57:53.330942 1039736 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-748426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-748426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:57:53.331006 1039736 ssh_runner.go:195] Run: sudo crictl info
	I0127 11:57:53.372629 1039736 cni.go:84] Creating CNI manager for ""
	I0127 11:57:53.372640 1039736 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 11:57:53.372653 1039736 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:57:53.372674 1039736 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-748426 NodeName:scheduled-stop-748426 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:57:53.372790 1039736 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "scheduled-stop-748426"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
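(The block above is the full multi-document kubeadm configuration minikube renders: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A hedged way to sanity-check a file like this, assuming kubeadm v1.32.x is on the PATH and the rendered file has been copied to the path used later in this log:)
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run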
	
	I0127 11:57:53.372858 1039736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:57:53.381795 1039736 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:57:53.381856 1039736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 11:57:53.390675 1039736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0127 11:57:53.408421 1039736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:57:53.426395 1039736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2311 bytes)
	I0127 11:57:53.444172 1039736 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 11:57:53.447705 1039736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:57:53.458211 1039736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:57:53.536768 1039736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:57:53.552419 1039736 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426 for IP: 192.168.76.2
	I0127 11:57:53.552430 1039736 certs.go:194] generating shared ca certs ...
	I0127 11:57:53.552444 1039736 certs.go:226] acquiring lock for ca certs: {Name:mke15f79704ae0e83f911aa0e3f9c4b862da9341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:53.552572 1039736 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-888339/.minikube/ca.key
	I0127 11:57:53.552612 1039736 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.key
	I0127 11:57:53.552617 1039736 certs.go:256] generating profile certs ...
	I0127 11:57:53.552669 1039736 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/client.key
	I0127 11:57:53.552686 1039736 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/client.crt with IP's: []
	I0127 11:57:54.256278 1039736 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/client.crt ...
	I0127 11:57:54.256293 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/client.crt: {Name:mk00473d4059a4eaf568e46c3f14ed7b30ed3260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:54.256475 1039736 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/client.key ...
	I0127 11:57:54.256482 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/client.key: {Name:mk2ff30c241cd8c08021624122d9a51abf04667c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:54.256562 1039736 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.key.61a3669b
	I0127 11:57:54.256575 1039736 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.crt.61a3669b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0127 11:57:54.699851 1039736 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.crt.61a3669b ...
	I0127 11:57:54.699867 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.crt.61a3669b: {Name:mkea04e822e41f1cc4061560bee5c60f81a75c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:54.700059 1039736 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.key.61a3669b ...
	I0127 11:57:54.700067 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.key.61a3669b: {Name:mkd87eaec3e4d85033834570fb31d33123fc7c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:54.700186 1039736 certs.go:381] copying /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.crt.61a3669b -> /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.crt
	I0127 11:57:54.700264 1039736 certs.go:385] copying /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.key.61a3669b -> /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.key
	I0127 11:57:54.700322 1039736 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.key
	I0127 11:57:54.700335 1039736 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.crt with IP's: []
	I0127 11:57:55.805953 1039736 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.crt ...
	I0127 11:57:55.805971 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.crt: {Name:mk9c74d7f0e1ab1fd77138e75cbacd1d9b873b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:55.806168 1039736 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.key ...
	I0127 11:57:55.806175 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.key: {Name:mk080b29214fa746515e3b62380eec3a05495edf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:57:55.806380 1039736 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715.pem (1338 bytes)
	W0127 11:57:55.806417 1039736 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715_empty.pem, impossibly tiny 0 bytes
	I0127 11:57:55.806425 1039736 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 11:57:55.806448 1039736 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem (1082 bytes)
	I0127 11:57:55.806470 1039736 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem (1123 bytes)
	I0127 11:57:55.806491 1039736 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem (1675 bytes)
	I0127 11:57:55.806530 1039736 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem (1708 bytes)
	I0127 11:57:55.807149 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:57:55.834707 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 11:57:55.859387 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:57:55.884106 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:57:55.907962 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 11:57:55.932389 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:57:55.957103 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:57:55.981805 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/scheduled-stop-748426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:57:56.007524 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715.pem --> /usr/share/ca-certificates/893715.pem (1338 bytes)
	I0127 11:57:56.034843 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem --> /usr/share/ca-certificates/8937152.pem (1708 bytes)
	I0127 11:57:56.060264 1039736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:57:56.085175 1039736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:57:56.103726 1039736 ssh_runner.go:195] Run: openssl version
	I0127 11:57:56.109402 1039736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/893715.pem && ln -fs /usr/share/ca-certificates/893715.pem /etc/ssl/certs/893715.pem"
	I0127 11:57:56.119068 1039736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/893715.pem
	I0127 11:57:56.122759 1039736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:31 /usr/share/ca-certificates/893715.pem
	I0127 11:57:56.122816 1039736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/893715.pem
	I0127 11:57:56.129750 1039736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/893715.pem /etc/ssl/certs/51391683.0"
	I0127 11:57:56.139005 1039736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8937152.pem && ln -fs /usr/share/ca-certificates/8937152.pem /etc/ssl/certs/8937152.pem"
	I0127 11:57:56.148221 1039736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937152.pem
	I0127 11:57:56.151586 1039736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:31 /usr/share/ca-certificates/8937152.pem
	I0127 11:57:56.151649 1039736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937152.pem
	I0127 11:57:56.158493 1039736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8937152.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:57:56.167885 1039736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:57:56.177159 1039736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:57:56.180774 1039736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:23 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:57:56.180829 1039736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:57:56.187838 1039736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:57:56.197541 1039736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:57:56.200750 1039736 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:57:56.200793 1039736 kubeadm.go:392] StartCluster: {Name:scheduled-stop-748426 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:scheduled-stop-748426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:57:56.200862 1039736 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 11:57:56.200924 1039736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 11:57:56.237132 1039736 cri.go:89] found id: ""
	I0127 11:57:56.237195 1039736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:57:56.246075 1039736 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:57:56.254898 1039736 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0127 11:57:56.254954 1039736 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:57:56.263472 1039736 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:57:56.263482 1039736 kubeadm.go:157] found existing configuration files:
	
	I0127 11:57:56.263538 1039736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:57:56.272255 1039736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:57:56.272314 1039736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:57:56.280660 1039736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:57:56.289195 1039736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:57:56.289251 1039736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:57:56.297800 1039736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:57:56.306585 1039736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:57:56.306647 1039736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:57:56.314750 1039736 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:57:56.323313 1039736 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:57:56.323369 1039736 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:57:56.331779 1039736 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 11:57:56.392369 1039736 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0127 11:57:56.392612 1039736 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-aws\n", err: exit status 1
	I0127 11:57:56.467370 1039736 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:58:11.297284 1039736 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:58:11.297338 1039736 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:58:11.297432 1039736 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0127 11:58:11.297487 1039736 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
	I0127 11:58:11.297521 1039736 kubeadm.go:310] OS: Linux
	I0127 11:58:11.297566 1039736 kubeadm.go:310] CGROUPS_CPU: enabled
	I0127 11:58:11.297614 1039736 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0127 11:58:11.297660 1039736 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0127 11:58:11.297708 1039736 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0127 11:58:11.297755 1039736 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0127 11:58:11.297803 1039736 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0127 11:58:11.297848 1039736 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0127 11:58:11.297895 1039736 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0127 11:58:11.297941 1039736 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0127 11:58:11.298013 1039736 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:58:11.298107 1039736 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:58:11.298196 1039736 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:58:11.298258 1039736 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:58:11.301091 1039736 out.go:235]   - Generating certificates and keys ...
	I0127 11:58:11.301183 1039736 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:58:11.301244 1039736 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:58:11.301310 1039736 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:58:11.301366 1039736 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:58:11.301425 1039736 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:58:11.301475 1039736 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:58:11.301528 1039736 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:58:11.301651 1039736 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-748426] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 11:58:11.301703 1039736 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:58:11.301823 1039736 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-748426] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 11:58:11.301887 1039736 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:58:11.301950 1039736 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:58:11.301993 1039736 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:58:11.302047 1039736 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:58:11.302097 1039736 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:58:11.302152 1039736 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:58:11.302204 1039736 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:58:11.302266 1039736 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:58:11.302319 1039736 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:58:11.302400 1039736 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:58:11.302465 1039736 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:58:11.305152 1039736 out.go:235]   - Booting up control plane ...
	I0127 11:58:11.305263 1039736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:58:11.305352 1039736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:58:11.305422 1039736 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:58:11.305523 1039736 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:58:11.305606 1039736 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:58:11.305644 1039736 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:58:11.305795 1039736 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:58:11.305912 1039736 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:58:11.305986 1039736 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000815141s
	I0127 11:58:11.306058 1039736 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:58:11.306125 1039736 kubeadm.go:310] [api-check] The API server is healthy after 6.001376326s
	I0127 11:58:11.306254 1039736 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:58:11.306386 1039736 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:58:11.306445 1039736 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:58:11.306637 1039736 kubeadm.go:310] [mark-control-plane] Marking the node scheduled-stop-748426 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:58:11.306693 1039736 kubeadm.go:310] [bootstrap-token] Using token: jegsuq.qjjdrkcunejmj7k5
	I0127 11:58:11.309387 1039736 out.go:235]   - Configuring RBAC rules ...
	I0127 11:58:11.309508 1039736 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:58:11.309641 1039736 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:58:11.309798 1039736 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:58:11.309952 1039736 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:58:11.310100 1039736 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:58:11.310193 1039736 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:58:11.310314 1039736 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:58:11.310376 1039736 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:58:11.310434 1039736 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:58:11.310438 1039736 kubeadm.go:310] 
	I0127 11:58:11.310504 1039736 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:58:11.310507 1039736 kubeadm.go:310] 
	I0127 11:58:11.310582 1039736 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:58:11.310585 1039736 kubeadm.go:310] 
	I0127 11:58:11.310613 1039736 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:58:11.310670 1039736 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:58:11.310719 1039736 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:58:11.310722 1039736 kubeadm.go:310] 
	I0127 11:58:11.310774 1039736 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:58:11.310781 1039736 kubeadm.go:310] 
	I0127 11:58:11.310827 1039736 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:58:11.310830 1039736 kubeadm.go:310] 
	I0127 11:58:11.310881 1039736 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:58:11.310954 1039736 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:58:11.311021 1039736 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:58:11.311024 1039736 kubeadm.go:310] 
	I0127 11:58:11.311107 1039736 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:58:11.311182 1039736 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:58:11.311186 1039736 kubeadm.go:310] 
	I0127 11:58:11.311271 1039736 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jegsuq.qjjdrkcunejmj7k5 \
	I0127 11:58:11.311372 1039736 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ab77ca629af18522722058cf6e6b9d1dd63a614828aa0be2683e90565b703f3c \
	I0127 11:58:11.311393 1039736 kubeadm.go:310] 	--control-plane 
	I0127 11:58:11.311397 1039736 kubeadm.go:310] 
	I0127 11:58:11.311480 1039736 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:58:11.311483 1039736 kubeadm.go:310] 
	I0127 11:58:11.311564 1039736 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jegsuq.qjjdrkcunejmj7k5 \
	I0127 11:58:11.311678 1039736 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ab77ca629af18522722058cf6e6b9d1dd63a614828aa0be2683e90565b703f3c 
	I0127 11:58:11.311686 1039736 cni.go:84] Creating CNI manager for ""
	I0127 11:58:11.311692 1039736 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 11:58:11.314500 1039736 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 11:58:11.317107 1039736 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 11:58:11.321118 1039736 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 11:58:11.321129 1039736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 11:58:11.340712 1039736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 11:58:11.622312 1039736 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:58:11.622427 1039736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:58:11.622493 1039736 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-748426 minikube.k8s.io/updated_at=2025_01_27T11_58_11_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=scheduled-stop-748426 minikube.k8s.io/primary=true
	I0127 11:58:11.793507 1039736 ops.go:34] apiserver oom_adj: -16
	I0127 11:58:11.793525 1039736 kubeadm.go:1113] duration metric: took 171.147367ms to wait for elevateKubeSystemPrivileges
	I0127 11:58:11.793536 1039736 kubeadm.go:394] duration metric: took 15.592749035s to StartCluster
	I0127 11:58:11.793551 1039736 settings.go:142] acquiring lock: {Name:mk8e4620a376eeb900823ad35149c0dd6d301c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:58:11.793612 1039736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 11:58:11.794281 1039736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/kubeconfig: {Name:mk75ddd380b783b9f157e482ffdcc29dbd635876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:58:11.794468 1039736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 11:58:11.794573 1039736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:58:11.794852 1039736 config.go:182] Loaded profile config "scheduled-stop-748426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:58:11.794887 1039736 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:58:11.794947 1039736 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-748426"
	I0127 11:58:11.794962 1039736 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-748426"
	I0127 11:58:11.794984 1039736 host.go:66] Checking if "scheduled-stop-748426" exists ...
	I0127 11:58:11.795460 1039736 cli_runner.go:164] Run: docker container inspect scheduled-stop-748426 --format={{.State.Status}}
	I0127 11:58:11.795714 1039736 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-748426"
	I0127 11:58:11.795729 1039736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-748426"
	I0127 11:58:11.796008 1039736 cli_runner.go:164] Run: docker container inspect scheduled-stop-748426 --format={{.State.Status}}
	I0127 11:58:11.799515 1039736 out.go:177] * Verifying Kubernetes components...
	I0127 11:58:11.806739 1039736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:58:11.842570 1039736 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-748426"
	I0127 11:58:11.842599 1039736 host.go:66] Checking if "scheduled-stop-748426" exists ...
	I0127 11:58:11.843009 1039736 cli_runner.go:164] Run: docker container inspect scheduled-stop-748426 --format={{.State.Status}}
	I0127 11:58:11.851421 1039736 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:58:11.854131 1039736 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:58:11.854141 1039736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:58:11.854206 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:58:11.891896 1039736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa Username:docker}
	I0127 11:58:11.892262 1039736 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:58:11.892269 1039736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:58:11.892325 1039736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-748426
	I0127 11:58:11.923185 1039736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/scheduled-stop-748426/id_rsa Username:docker}
	I0127 11:58:12.065929 1039736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:58:12.106662 1039736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:58:12.106974 1039736 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:58:12.145000 1039736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:58:12.499769 1039736 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
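(The Corefile rewrite above adds a hosts{} stanza mapping host.minikube.internal to the gateway. One way to confirm the record landed, assuming kubectl is pointed at this cluster:)
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -n 'host.minikube.internal'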
	I0127 11:58:12.501380 1039736 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:58:12.501433 1039736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:58:12.702443 1039736 api_server.go:72] duration metric: took 907.949247ms to wait for apiserver process to appear ...
	I0127 11:58:12.702454 1039736 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:58:12.702471 1039736 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 11:58:12.705254 1039736 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 11:58:12.707989 1039736 addons.go:514] duration metric: took 913.083247ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0127 11:58:12.712760 1039736 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 11:58:12.713826 1039736 api_server.go:141] control plane version: v1.32.1
	I0127 11:58:12.713840 1039736 api_server.go:131] duration metric: took 11.380954ms to wait for apiserver health ...
	I0127 11:58:12.713847 1039736 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:58:12.720010 1039736 system_pods.go:59] 5 kube-system pods found
	I0127 11:58:12.720030 1039736 system_pods.go:61] "etcd-scheduled-stop-748426" [6086d396-6517-4117-a4c4-d40c8e9cc36a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 11:58:12.720039 1039736 system_pods.go:61] "kube-apiserver-scheduled-stop-748426" [c865c1e1-0032-477f-9637-85739079d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 11:58:12.720046 1039736 system_pods.go:61] "kube-controller-manager-scheduled-stop-748426" [181345e9-32cc-478d-b1e1-c00e42c5b842] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 11:58:12.720053 1039736 system_pods.go:61] "kube-scheduler-scheduled-stop-748426" [d60f2708-d555-4717-98ab-8502c843a7fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 11:58:12.720058 1039736 system_pods.go:61] "storage-provisioner" [af66dd39-9188-415b-9a71-7dac74b7cc9d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0127 11:58:12.720063 1039736 system_pods.go:74] duration metric: took 6.211212ms to wait for pod list to return data ...
	I0127 11:58:12.720073 1039736 kubeadm.go:582] duration metric: took 925.586474ms to wait for: map[apiserver:true system_pods:true]
	I0127 11:58:12.720086 1039736 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:58:12.723293 1039736 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0127 11:58:12.723311 1039736 node_conditions.go:123] node cpu capacity is 2
	I0127 11:58:12.723320 1039736 node_conditions.go:105] duration metric: took 3.230518ms to run NodePressure ...
	I0127 11:58:12.723330 1039736 start.go:241] waiting for startup goroutines ...
	I0127 11:58:13.013202 1039736 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-748426" context rescaled to 1 replicas
	I0127 11:58:13.013235 1039736 start.go:246] waiting for cluster config update ...
	I0127 11:58:13.013246 1039736 start.go:255] writing updated cluster config ...
	I0127 11:58:13.013631 1039736 ssh_runner.go:195] Run: rm -f paused
	I0127 11:58:13.072980 1039736 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 11:58:13.076172 1039736 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-748426" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a97b452b5e772       7fc9d4aa817aa       10 seconds ago      Running             etcd                      0                   1091e5ddc1a38       etcd-scheduled-stop-748426
	16510b6a7ac00       265c2dedf28ab       10 seconds ago      Running             kube-apiserver            0                   dafabe52ca89a       kube-apiserver-scheduled-stop-748426
	727a0847f087b       ddb38cac617cb       10 seconds ago      Running             kube-scheduler            0                   bdb352f4eef6d       kube-scheduler-scheduled-stop-748426
	537f10d28b292       2933761aa7ada       10 seconds ago      Running             kube-controller-manager   0                   02ec88019edd4       kube-controller-manager-scheduled-stop-748426
	
	
	==> containerd <==
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.190182980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.225252435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.225324466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.225337774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.225445391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.261289089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-scheduled-stop-748426,Uid:caa5ac4f85b15eaa3c74fad992f53917,Namespace:kube-system,Attempt:0,} returns sandbox id \"02ec88019edd46e5bdc80fdaaf98da7c791fc6349738f6571f3fb57afff2cbf9\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.265004848Z" level=info msg="CreateContainer within sandbox \"02ec88019edd46e5bdc80fdaaf98da7c791fc6349738f6571f3fb57afff2cbf9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.293320826Z" level=info msg="CreateContainer within sandbox \"02ec88019edd46e5bdc80fdaaf98da7c791fc6349738f6571f3fb57afff2cbf9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"537f10d28b292f4a6cfe37966bd1c067c3a75dc8c8809342cbfffc2f98f92dda\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.294206074Z" level=info msg="StartContainer for \"537f10d28b292f4a6cfe37966bd1c067c3a75dc8c8809342cbfffc2f98f92dda\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.298296159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-scheduled-stop-748426,Uid:afa3e820e8b9e990cade89a49848c8b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdb352f4eef6d3dce1efb0fee4298060a96b38738500aa3622e60326a6baf7b3\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.301000986Z" level=info msg="CreateContainer within sandbox \"bdb352f4eef6d3dce1efb0fee4298060a96b38738500aa3622e60326a6baf7b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.308123243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-scheduled-stop-748426,Uid:d0e1b7884e19e5db4cf9e2ac9cb71dc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dafabe52ca89a92f769ffb759f6116076cf99ac7fb4ce4499aca33b4de449332\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.313269609Z" level=info msg="CreateContainer within sandbox \"dafabe52ca89a92f769ffb759f6116076cf99ac7fb4ce4499aca33b4de449332\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.325625573Z" level=info msg="CreateContainer within sandbox \"bdb352f4eef6d3dce1efb0fee4298060a96b38738500aa3622e60326a6baf7b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"727a0847f087bcdbf434e1c7383741df9ddec84b07c5db8e59c53a92c20309ab\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.326284300Z" level=info msg="StartContainer for \"727a0847f087bcdbf434e1c7383741df9ddec84b07c5db8e59c53a92c20309ab\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.350733424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-scheduled-stop-748426,Uid:0fca17c6874e88feec826e2e1a723fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1091e5ddc1a38cc7e318ff82f92cfc15a52345a82f102639ae65f6515f6b3e2c\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.354675065Z" level=info msg="CreateContainer within sandbox \"dafabe52ca89a92f769ffb759f6116076cf99ac7fb4ce4499aca33b4de449332\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"16510b6a7ac00f01ced6eed0dd6556db6e2eaa8f4a56311a43950b7aa4a3c0ab\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.355087823Z" level=info msg="StartContainer for \"16510b6a7ac00f01ced6eed0dd6556db6e2eaa8f4a56311a43950b7aa4a3c0ab\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.356970392Z" level=info msg="CreateContainer within sandbox \"1091e5ddc1a38cc7e318ff82f92cfc15a52345a82f102639ae65f6515f6b3e2c\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.390971365Z" level=info msg="CreateContainer within sandbox \"1091e5ddc1a38cc7e318ff82f92cfc15a52345a82f102639ae65f6515f6b3e2c\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"a97b452b5e772196332a5154a75e1664b70a4eab6fca5fcc659aa060e1f02e83\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.391458148Z" level=info msg="StartContainer for \"a97b452b5e772196332a5154a75e1664b70a4eab6fca5fcc659aa060e1f02e83\""
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.443585168Z" level=info msg="StartContainer for \"537f10d28b292f4a6cfe37966bd1c067c3a75dc8c8809342cbfffc2f98f92dda\" returns successfully"
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.474556907Z" level=info msg="StartContainer for \"727a0847f087bcdbf434e1c7383741df9ddec84b07c5db8e59c53a92c20309ab\" returns successfully"
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.537863432Z" level=info msg="StartContainer for \"16510b6a7ac00f01ced6eed0dd6556db6e2eaa8f4a56311a43950b7aa4a3c0ab\" returns successfully"
	Jan 27 11:58:04 scheduled-stop-748426 containerd[832]: time="2025-01-27T11:58:04.643130380Z" level=info msg="StartContainer for \"a97b452b5e772196332a5154a75e1664b70a4eab6fca5fcc659aa060e1f02e83\" returns successfully"
	
	
	==> describe nodes <==
	Name:               scheduled-stop-748426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-748426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=scheduled-stop-748426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_58_11_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:58:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-748426
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:58:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:58:08 +0000   Mon, 27 Jan 2025 11:58:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:58:08 +0000   Mon, 27 Jan 2025 11:58:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:58:08 +0000   Mon, 27 Jan 2025 11:58:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:58:08 +0000   Mon, 27 Jan 2025 11:58:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-748426
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 85987e9b0c0e490193771bc39c9df73a
	  System UUID:                8704061c-637c-4267-8f48-5186d048eba7
	  Boot ID:                    9a2b5a8b-82ce-43cf-92bd-6297263d30a0
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	Non-terminated Pods:          (5 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-748426                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-748426             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-scheduled-stop-748426    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-748426             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 4s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 4s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4s    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4s    kubelet          Node scheduled-stop-748426 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s    kubelet          Node scheduled-stop-748426 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s    kubelet          Node scheduled-stop-748426 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           0s    node-controller  Node scheduled-stop-748426 event: Registered Node scheduled-stop-748426 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [a97b452b5e772196332a5154a75e1664b70a4eab6fca5fcc659aa060e1f02e83] <==
	{"level":"info","ts":"2025-01-27T11:58:04.761320Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T11:58:04.762050Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T11:58:04.762207Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T11:58:04.762694Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-01-27T11:58:04.763106Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-01-27T11:58:04.921068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-01-27T11:58:04.921291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-01-27T11:58:04.921412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-01-27T11:58:04.921530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-01-27T11:58:04.921655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-01-27T11:58:04.921745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-01-27T11:58:04.921837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-01-27T11:58:04.925209Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:scheduled-stop-748426 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T11:58:04.925541Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:58:04.925758Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:58:04.925967Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T11:58:04.926060Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T11:58:04.926168Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T11:58:04.926957Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T11:58:04.927831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-01-27T11:58:04.937122Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:58:04.937445Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:58:04.937611Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T11:58:04.939347Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T11:58:04.949770Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:58:15 up  4:40,  0 users,  load average: 1.68, 1.85, 2.33
	Linux scheduled-stop-748426 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [16510b6a7ac00f01ced6eed0dd6556db6e2eaa8f4a56311a43950b7aa4a3c0ab] <==
	I0127 11:58:08.237378       1 cache.go:39] Caches are synced for autoregister controller
	I0127 11:58:08.259120       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 11:58:08.262438       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 11:58:08.262467       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 11:58:08.262680       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 11:58:08.263111       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 11:58:08.281088       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 11:58:08.281277       1 policy_source.go:240] refreshing policies
	E0127 11:58:08.311524       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0127 11:58:08.321181       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0127 11:58:08.352658       1 controller.go:615] quota admission added evaluator for: namespaces
	I0127 11:58:08.521437       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 11:58:08.936129       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0127 11:58:08.943587       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0127 11:58:08.943609       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 11:58:09.597325       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 11:58:09.644166       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 11:58:09.778161       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0127 11:58:09.785417       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 11:58:09.786635       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 11:58:09.791696       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 11:58:10.266697       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 11:58:10.707690       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 11:58:10.720434       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 11:58:10.731869       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [537f10d28b292f4a6cfe37966bd1c067c3a75dc8c8809342cbfffc2f98f92dda] <==
	I0127 11:58:14.826509       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 11:58:14.846378       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 11:58:14.857326       1 shared_informer.go:320] Caches are synced for node
	I0127 11:58:14.857443       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 11:58:14.857495       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 11:58:14.857501       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 11:58:14.857507       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 11:58:14.860493       1 shared_informer.go:320] Caches are synced for HPA
	I0127 11:58:14.862716       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 11:58:14.862755       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 11:58:14.862779       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 11:58:14.862796       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 11:58:14.863380       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 11:58:14.863572       1 shared_informer.go:320] Caches are synced for service account
	I0127 11:58:14.863767       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 11:58:14.863945       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 11:58:14.866173       1 shared_informer.go:320] Caches are synced for deployment
	I0127 11:58:14.871861       1 shared_informer.go:320] Caches are synced for job
	I0127 11:58:14.873128       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 11:58:14.880180       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 11:58:14.888365       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 11:58:14.894435       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-748426" podCIDRs=["10.244.0.0/24"]
	I0127 11:58:14.894672       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-748426"
	I0127 11:58:14.894777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="scheduled-stop-748426"
	I0127 11:58:14.897181       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-scheduler [727a0847f087bcdbf434e1c7383741df9ddec84b07c5db8e59c53a92c20309ab] <==
	W0127 11:58:08.715709       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:58:08.715735       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.715824       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 11:58:08.715847       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.716030       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 11:58:08.716054       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.716106       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 11:58:08.716126       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.716193       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 11:58:08.716214       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.716272       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 11:58:08.716291       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.716352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:58:08.716371       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.716451       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 11:58:08.716471       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.716525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 11:58:08.716542       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.711756       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 11:58:08.716589       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.717138       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 11:58:08.717163       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:58:08.717609       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 11:58:08.717635       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 11:58:10.317368       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059525    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/afa3e820e8b9e990cade89a49848c8b3-kubeconfig\") pod \"kube-scheduler-scheduled-stop-748426\" (UID: \"afa3e820e8b9e990cade89a49848c8b3\") " pod="kube-system/kube-scheduler-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059545    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0e1b7884e19e5db4cf9e2ac9cb71dc6-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-748426\" (UID: \"d0e1b7884e19e5db4cf9e2ac9cb71dc6\") " pod="kube-system/kube-apiserver-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059567    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0e1b7884e19e5db4cf9e2ac9cb71dc6-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-748426\" (UID: \"d0e1b7884e19e5db4cf9e2ac9cb71dc6\") " pod="kube-system/kube-apiserver-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059586    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/caa5ac4f85b15eaa3c74fad992f53917-ca-certs\") pod \"kube-controller-manager-scheduled-stop-748426\" (UID: \"caa5ac4f85b15eaa3c74fad992f53917\") " pod="kube-system/kube-controller-manager-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059611    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/caa5ac4f85b15eaa3c74fad992f53917-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-748426\" (UID: \"caa5ac4f85b15eaa3c74fad992f53917\") " pod="kube-system/kube-controller-manager-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059633    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/caa5ac4f85b15eaa3c74fad992f53917-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-748426\" (UID: \"caa5ac4f85b15eaa3c74fad992f53917\") " pod="kube-system/kube-controller-manager-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059653    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/0fca17c6874e88feec826e2e1a723fe3-etcd-certs\") pod \"etcd-scheduled-stop-748426\" (UID: \"0fca17c6874e88feec826e2e1a723fe3\") " pod="kube-system/etcd-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059670    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0e1b7884e19e5db4cf9e2ac9cb71dc6-ca-certs\") pod \"kube-apiserver-scheduled-stop-748426\" (UID: \"d0e1b7884e19e5db4cf9e2ac9cb71dc6\") " pod="kube-system/kube-apiserver-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059694    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0e1b7884e19e5db4cf9e2ac9cb71dc6-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-748426\" (UID: \"d0e1b7884e19e5db4cf9e2ac9cb71dc6\") " pod="kube-system/kube-apiserver-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059714    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/0fca17c6874e88feec826e2e1a723fe3-etcd-data\") pod \"etcd-scheduled-stop-748426\" (UID: \"0fca17c6874e88feec826e2e1a723fe3\") " pod="kube-system/etcd-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059733    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0e1b7884e19e5db4cf9e2ac9cb71dc6-k8s-certs\") pod \"kube-apiserver-scheduled-stop-748426\" (UID: \"d0e1b7884e19e5db4cf9e2ac9cb71dc6\") " pod="kube-system/kube-apiserver-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.059752    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/caa5ac4f85b15eaa3c74fad992f53917-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-748426\" (UID: \"caa5ac4f85b15eaa3c74fad992f53917\") " pod="kube-system/kube-controller-manager-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.622525    1535 apiserver.go:52] "Watching apiserver"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.631733    1535 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.742448    1535 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: E0127 11:58:11.753196    1535 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-748426\" already exists" pod="kube-system/etcd-scheduled-stop-748426"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.776305    1535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-748426" podStartSLOduration=1.7762733609999999 podStartE2EDuration="1.776273361s" podCreationTimestamp="2025-01-27 11:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 11:58:11.764027015 +0000 UTC m=+1.229960485" watchObservedRunningTime="2025-01-27 11:58:11.776273361 +0000 UTC m=+1.242206815"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.791513    1535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-748426" podStartSLOduration=1.791491597 podStartE2EDuration="1.791491597s" podCreationTimestamp="2025-01-27 11:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 11:58:11.77651498 +0000 UTC m=+1.242448434" watchObservedRunningTime="2025-01-27 11:58:11.791491597 +0000 UTC m=+1.257425050"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.817596    1535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-748426" podStartSLOduration=1.817579839 podStartE2EDuration="1.817579839s" podCreationTimestamp="2025-01-27 11:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 11:58:11.817377702 +0000 UTC m=+1.283311189" watchObservedRunningTime="2025-01-27 11:58:11.817579839 +0000 UTC m=+1.283513293"
	Jan 27 11:58:11 scheduled-stop-748426 kubelet[1535]: I0127 11:58:11.817784    1535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-748426" podStartSLOduration=2.81777803 podStartE2EDuration="2.81777803s" podCreationTimestamp="2025-01-27 11:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 11:58:11.791793736 +0000 UTC m=+1.257727288" watchObservedRunningTime="2025-01-27 11:58:11.81777803 +0000 UTC m=+1.283711492"
	Jan 27 11:58:14 scheduled-stop-748426 kubelet[1535]: I0127 11:58:14.901909    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dms8k\" (UniqueName: \"kubernetes.io/projected/af66dd39-9188-415b-9a71-7dac74b7cc9d-kube-api-access-dms8k\") pod \"storage-provisioner\" (UID: \"af66dd39-9188-415b-9a71-7dac74b7cc9d\") " pod="kube-system/storage-provisioner"
	Jan 27 11:58:14 scheduled-stop-748426 kubelet[1535]: I0127 11:58:14.901975    1535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af66dd39-9188-415b-9a71-7dac74b7cc9d-tmp\") pod \"storage-provisioner\" (UID: \"af66dd39-9188-415b-9a71-7dac74b7cc9d\") " pod="kube-system/storage-provisioner"
	Jan 27 11:58:15 scheduled-stop-748426 kubelet[1535]: E0127 11:58:15.020549    1535 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 27 11:58:15 scheduled-stop-748426 kubelet[1535]: E0127 11:58:15.020614    1535 projected.go:194] Error preparing data for projected volume kube-api-access-dms8k for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jan 27 11:58:15 scheduled-stop-748426 kubelet[1535]: E0127 11:58:15.020718    1535 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af66dd39-9188-415b-9a71-7dac74b7cc9d-kube-api-access-dms8k podName:af66dd39-9188-415b-9a71-7dac74b7cc9d nodeName:}" failed. No retries permitted until 2025-01-27 11:58:15.520688775 +0000 UTC m=+4.986622237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dms8k" (UniqueName: "kubernetes.io/projected/af66dd39-9188-415b-9a71-7dac74b7cc9d-kube-api-access-dms8k") pod "storage-provisioner" (UID: "af66dd39-9188-415b-9a71-7dac74b7cc9d") : configmap "kube-root-ca.crt" not found
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-748426 -n scheduled-stop-748426
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-748426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kindnet-dtmxl kube-proxy-w8b5l storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-748426 describe pod kindnet-dtmxl kube-proxy-w8b5l storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-748426 describe pod kindnet-dtmxl kube-proxy-w8b5l storage-provisioner: exit status 1 (145.183246ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kindnet-dtmxl" not found
	Error from server (NotFound): pods "kube-proxy-w8b5l" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-748426 describe pod kindnet-dtmxl kube-proxy-w8b5l storage-provisioner: exit status 1
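The NotFound errors above are most likely a namespace mismatch: the pod listing at helpers_test.go:261 queried all namespaces with -A, while the describe at helpers_test.go:277 ran without -n and therefore looked in the context's "default" namespace rather than kube-system. A minimal shell sketch (assuming kubectl is on PATH; the profile and pod names are the ones reported in this run) that checks the same pods without failing when they are absent:

	kubectl --context scheduled-stop-748426 -n kube-system get pod kindnet-dtmxl kube-proxy-w8b5l storage-provisioner --ignore-not-found -o wide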
helpers_test.go:175: Cleaning up "scheduled-stop-748426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-748426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-748426: (2.117264971s)
--- FAIL: TestScheduledStopUnix (34.46s)
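For by-hand triage of a failure like this, the same diagnostics can be regathered with standard commands while the profile still exists (that is, before the cleanup step above). A minimal sketch using the names from this run; the component logs shown earlier (container status, containerd, kubelet, and so on) are what "minikube logs" prints:

	out/minikube-linux-arm64 status -p scheduled-stop-748426
	out/minikube-linux-arm64 logs -p scheduled-stop-748426
	kubectl --context scheduled-stop-748426 get pods -A --field-selector=status.phase!=Running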

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (377.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-999803 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-999803 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.019417741s)

                                                
                                                
-- stdout --
	* [old-k8s-version-999803] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-999803" primary control-plane node in "old-k8s-version-999803" cluster
	* Pulling base image v0.0.46 ...
	* Restarting existing docker container for "old-k8s-version-999803" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-999803 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:09:23.286361 1099122 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:09:23.286558 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:23.286584 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:09:23.286606 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:23.286860 1099122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 12:09:23.287286 1099122 out.go:352] Setting JSON to false
	I0127 12:09:23.288289 1099122 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17509,"bootTime":1737962255,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 12:09:23.288392 1099122 start.go:139] virtualization:  
	I0127 12:09:23.290852 1099122 out.go:177] * [old-k8s-version-999803] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 12:09:23.292203 1099122 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:09:23.292982 1099122 notify.go:220] Checking for updates...
	I0127 12:09:23.295370 1099122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:09:23.297468 1099122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 12:09:23.298826 1099122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 12:09:23.300298 1099122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 12:09:23.301848 1099122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:09:23.304008 1099122 config.go:182] Loaded profile config "old-k8s-version-999803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 12:09:23.306489 1099122 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 12:09:23.308010 1099122 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:09:23.341499 1099122 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:09:23.341627 1099122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:09:23.415401 1099122 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-01-27 12:09:23.403497453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:09:23.415525 1099122 docker.go:318] overlay module found
	I0127 12:09:23.418120 1099122 out.go:177] * Using the docker driver based on existing profile
	I0127 12:09:23.420474 1099122 start.go:297] selected driver: docker
	I0127 12:09:23.420493 1099122 start.go:901] validating driver "docker" against &{Name:old-k8s-version-999803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-999803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:09:23.420649 1099122 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:09:23.421433 1099122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:09:23.505115 1099122 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:69 SystemTime:2025-01-27 12:09:23.4935221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:09:23.505497 1099122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:09:23.505529 1099122 cni.go:84] Creating CNI manager for ""
	I0127 12:09:23.505571 1099122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 12:09:23.505613 1099122 start.go:340] cluster config:
	{Name:old-k8s-version-999803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-999803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:09:23.508558 1099122 out.go:177] * Starting "old-k8s-version-999803" primary control-plane node in "old-k8s-version-999803" cluster
	I0127 12:09:23.511228 1099122 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 12:09:23.513953 1099122 out.go:177] * Pulling base image v0.0.46 ...
	I0127 12:09:23.516528 1099122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 12:09:23.516594 1099122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 12:09:23.516596 1099122 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 12:09:23.516603 1099122 cache.go:56] Caching tarball of preloaded images
	I0127 12:09:23.516765 1099122 preload.go:172] Found /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0127 12:09:23.516776 1099122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0127 12:09:23.516892 1099122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/config.json ...
	I0127 12:09:23.545993 1099122 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 12:09:23.546020 1099122 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 12:09:23.546038 1099122 cache.go:227] Successfully downloaded all kic artifacts
	I0127 12:09:23.546070 1099122 start.go:360] acquireMachinesLock for old-k8s-version-999803: {Name:mkb5231a94ab32a7dfe061b82488c125adaefd6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:23.546136 1099122 start.go:364] duration metric: took 43.109µs to acquireMachinesLock for "old-k8s-version-999803"
	I0127 12:09:23.546159 1099122 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:09:23.546171 1099122 fix.go:54] fixHost starting: 
	I0127 12:09:23.546437 1099122 cli_runner.go:164] Run: docker container inspect old-k8s-version-999803 --format={{.State.Status}}
	I0127 12:09:23.567440 1099122 fix.go:112] recreateIfNeeded on old-k8s-version-999803: state=Stopped err=<nil>
	W0127 12:09:23.567473 1099122 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:09:23.571027 1099122 out.go:177] * Restarting existing docker container for "old-k8s-version-999803" ...
	I0127 12:09:23.573591 1099122 cli_runner.go:164] Run: docker start old-k8s-version-999803
	I0127 12:09:23.938557 1099122 cli_runner.go:164] Run: docker container inspect old-k8s-version-999803 --format={{.State.Status}}
	I0127 12:09:23.963783 1099122 kic.go:430] container "old-k8s-version-999803" state is running.
	I0127 12:09:23.964166 1099122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-999803
	I0127 12:09:23.988419 1099122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/config.json ...
	I0127 12:09:23.988642 1099122 machine.go:93] provisionDockerMachine start ...
	I0127 12:09:23.988697 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:24.016470 1099122 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:24.016815 1099122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I0127 12:09:24.016837 1099122 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:09:24.017596 1099122 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0127 12:09:27.148911 1099122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-999803
	
	I0127 12:09:27.148938 1099122 ubuntu.go:169] provisioning hostname "old-k8s-version-999803"
	I0127 12:09:27.149017 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:27.182448 1099122 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:27.182744 1099122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I0127 12:09:27.182762 1099122 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-999803 && echo "old-k8s-version-999803" | sudo tee /etc/hostname
	I0127 12:09:27.321909 1099122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-999803
	
	I0127 12:09:27.322049 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:27.346273 1099122 main.go:141] libmachine: Using SSH client type: native
	I0127 12:09:27.346527 1099122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I0127 12:09:27.346555 1099122 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-999803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-999803/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-999803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:09:27.472924 1099122 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:09:27.472966 1099122 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20318-888339/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-888339/.minikube}
	I0127 12:09:27.472988 1099122 ubuntu.go:177] setting up certificates
	I0127 12:09:27.472998 1099122 provision.go:84] configureAuth start
	I0127 12:09:27.473097 1099122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-999803
	I0127 12:09:27.503556 1099122 provision.go:143] copyHostCerts
	I0127 12:09:27.503619 1099122 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem, removing ...
	I0127 12:09:27.503627 1099122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem
	I0127 12:09:27.503698 1099122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem (1082 bytes)
	I0127 12:09:27.503797 1099122 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem, removing ...
	I0127 12:09:27.503802 1099122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem
	I0127 12:09:27.503830 1099122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem (1123 bytes)
	I0127 12:09:27.503887 1099122 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem, removing ...
	I0127 12:09:27.503892 1099122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem
	I0127 12:09:27.503948 1099122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem (1675 bytes)
	I0127 12:09:27.504000 1099122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-999803 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-999803]
	I0127 12:09:28.346559 1099122 provision.go:177] copyRemoteCerts
	I0127 12:09:28.346675 1099122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:09:28.346748 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:28.374775 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:28.474505 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:09:28.503951 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 12:09:28.536434 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:09:28.583474 1099122 provision.go:87] duration metric: took 1.110460237s to configureAuth
	I0127 12:09:28.583502 1099122 ubuntu.go:193] setting minikube options for container-runtime
	I0127 12:09:28.583694 1099122 config.go:182] Loaded profile config "old-k8s-version-999803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 12:09:28.583707 1099122 machine.go:96] duration metric: took 4.595057547s to provisionDockerMachine
	I0127 12:09:28.583715 1099122 start.go:293] postStartSetup for "old-k8s-version-999803" (driver="docker")
	I0127 12:09:28.583725 1099122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:09:28.583779 1099122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:09:28.583823 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:28.623674 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:28.729593 1099122 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:09:28.737167 1099122 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 12:09:28.737204 1099122 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 12:09:28.737215 1099122 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 12:09:28.737222 1099122 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 12:09:28.737235 1099122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-888339/.minikube/addons for local assets ...
	I0127 12:09:28.737295 1099122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-888339/.minikube/files for local assets ...
	I0127 12:09:28.737372 1099122 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem -> 8937152.pem in /etc/ssl/certs
	I0127 12:09:28.737475 1099122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:09:28.749110 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem --> /etc/ssl/certs/8937152.pem (1708 bytes)
	I0127 12:09:28.800541 1099122 start.go:296] duration metric: took 216.810668ms for postStartSetup
	I0127 12:09:28.800623 1099122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:09:28.800680 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:28.846652 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:28.966791 1099122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 12:09:28.986523 1099122 fix.go:56] duration metric: took 5.440348943s for fixHost
	I0127 12:09:28.986551 1099122 start.go:83] releasing machines lock for "old-k8s-version-999803", held for 5.440403367s
	I0127 12:09:28.986632 1099122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-999803
	I0127 12:09:29.043546 1099122 ssh_runner.go:195] Run: cat /version.json
	I0127 12:09:29.043609 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:29.043609 1099122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:09:29.043677 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:29.091513 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:29.101587 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:29.377301 1099122 ssh_runner.go:195] Run: systemctl --version
	I0127 12:09:29.382181 1099122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:09:29.389631 1099122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 12:09:29.409254 1099122 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 12:09:29.409328 1099122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:09:29.420280 1099122 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 12:09:29.420301 1099122 start.go:495] detecting cgroup driver to use...
	I0127 12:09:29.420332 1099122 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 12:09:29.420386 1099122 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:09:29.441810 1099122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:09:29.468770 1099122 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:09:29.468862 1099122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:09:29.489484 1099122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:09:29.524146 1099122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:09:29.742846 1099122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:09:30.032138 1099122 docker.go:233] disabling docker service ...
	I0127 12:09:30.032219 1099122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:09:30.111880 1099122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:09:30.171407 1099122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:09:30.314282 1099122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:09:30.434053 1099122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:09:30.453712 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:09:30.474448 1099122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0127 12:09:30.487544 1099122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:09:30.497807 1099122 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:09:30.497877 1099122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:09:30.525869 1099122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:09:30.553630 1099122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:09:30.577099 1099122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:09:30.605591 1099122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:09:30.617164 1099122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:09:30.632417 1099122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:09:30.644125 1099122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:09:30.654945 1099122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:09:30.787916 1099122 ssh_runner.go:195] Run: sudo systemctl restart containerd
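Note: the sequence logged above (writing /etc/crictl.yaml, pinning sandbox_image to registry.k8s.io/pause:3.2, forcing SystemdCgroup = false, pointing conf_dir at /etc/cni/net.d, then reloading and restarting containerd) is how this run switches containerd to the cgroupfs driver before bringing up Kubernetes v1.20. The Go sketch below simply replays those same shell steps on a local machine; it is an illustrative aside, not minikube code, and it assumes root access plus an existing /etc/containerd/config.toml.

	// containerd_cgroupfs.go - illustrative sketch that replays the shell steps
	// from the log above; command strings are copied from the log, everything
	// else (running them locally via sudo) is an assumption.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(cmd string) error {
		out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
		fmt.Printf("$ %s\n%s", cmd, out)
		return err
	}

	func main() {
		steps := []string{
			// pin the sandbox (pause) image expected by Kubernetes v1.20
			`sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml`,
			// use the cgroupfs cgroup driver instead of systemd
			`sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
			// point CNI at the standard config directory
			`sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
			"systemctl daemon-reload",
			"systemctl restart containerd",
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}
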
	I0127 12:09:31.017179 1099122 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:09:31.017248 1099122 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:09:31.024802 1099122 start.go:563] Will wait 60s for crictl version
	I0127 12:09:31.024869 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:09:31.028937 1099122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:09:31.100051 1099122 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0127 12:09:31.100131 1099122 ssh_runner.go:195] Run: containerd --version
	I0127 12:09:31.146811 1099122 ssh_runner.go:195] Run: containerd --version
	I0127 12:09:31.173670 1099122 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
	I0127 12:09:31.175120 1099122 cli_runner.go:164] Run: docker network inspect old-k8s-version-999803 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:09:31.211498 1099122 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0127 12:09:31.215529 1099122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:09:31.226955 1099122 kubeadm.go:883] updating cluster {Name:old-k8s-version-999803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-999803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:09:31.227069 1099122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 12:09:31.227131 1099122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:09:31.282226 1099122 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:09:31.282246 1099122 containerd.go:534] Images already preloaded, skipping extraction
	I0127 12:09:31.282304 1099122 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:09:31.340833 1099122 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:09:31.340901 1099122 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:09:31.340922 1099122 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0127 12:09:31.341084 1099122 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-999803 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-999803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:09:31.341182 1099122 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:09:31.409559 1099122 cni.go:84] Creating CNI manager for ""
	I0127 12:09:31.409581 1099122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 12:09:31.409596 1099122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:09:31.409618 1099122 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-999803 NodeName:old-k8s-version-999803 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 12:09:31.409742 1099122 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-999803"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:09:31.409807 1099122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 12:09:31.419450 1099122 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:09:31.419571 1099122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:09:31.429011 1099122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0127 12:09:31.448157 1099122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:09:31.467193 1099122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0127 12:09:31.485896 1099122 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 12:09:31.489817 1099122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:09:31.500650 1099122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:09:31.604587 1099122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:09:31.621052 1099122 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803 for IP: 192.168.76.2
	I0127 12:09:31.621120 1099122 certs.go:194] generating shared ca certs ...
	I0127 12:09:31.621149 1099122 certs.go:226] acquiring lock for ca certs: {Name:mke15f79704ae0e83f911aa0e3f9c4b862da9341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:09:31.621347 1099122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-888339/.minikube/ca.key
	I0127 12:09:31.621433 1099122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.key
	I0127 12:09:31.621459 1099122 certs.go:256] generating profile certs ...
	I0127 12:09:31.621604 1099122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.key
	I0127 12:09:31.621712 1099122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/apiserver.key.10aba280
	I0127 12:09:31.621799 1099122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/proxy-client.key
	I0127 12:09:31.622072 1099122 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715.pem (1338 bytes)
	W0127 12:09:31.622128 1099122 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715_empty.pem, impossibly tiny 0 bytes
	I0127 12:09:31.622165 1099122 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:09:31.622215 1099122 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:09:31.622273 1099122 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:09:31.622333 1099122 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem (1675 bytes)
	I0127 12:09:31.622413 1099122 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem (1708 bytes)
	I0127 12:09:31.623141 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:09:31.706726 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:09:31.799931 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:09:31.834885 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:09:31.861207 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 12:09:31.891039 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:09:31.916614 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:09:31.943387 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:09:31.975203 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:09:32.003272 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715.pem --> /usr/share/ca-certificates/893715.pem (1338 bytes)
	I0127 12:09:32.034780 1099122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem --> /usr/share/ca-certificates/8937152.pem (1708 bytes)
	I0127 12:09:32.068572 1099122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:09:32.087431 1099122 ssh_runner.go:195] Run: openssl version
	I0127 12:09:32.094084 1099122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:09:32.103576 1099122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:09:32.107900 1099122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:23 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:09:32.107971 1099122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:09:32.115400 1099122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:09:32.124714 1099122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/893715.pem && ln -fs /usr/share/ca-certificates/893715.pem /etc/ssl/certs/893715.pem"
	I0127 12:09:32.134981 1099122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/893715.pem
	I0127 12:09:32.139173 1099122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:31 /usr/share/ca-certificates/893715.pem
	I0127 12:09:32.139236 1099122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/893715.pem
	I0127 12:09:32.147615 1099122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/893715.pem /etc/ssl/certs/51391683.0"
	I0127 12:09:32.158978 1099122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8937152.pem && ln -fs /usr/share/ca-certificates/8937152.pem /etc/ssl/certs/8937152.pem"
	I0127 12:09:32.170491 1099122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937152.pem
	I0127 12:09:32.174348 1099122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:31 /usr/share/ca-certificates/8937152.pem
	I0127 12:09:32.174479 1099122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937152.pem
	I0127 12:09:32.183063 1099122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8937152.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:09:32.193473 1099122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:09:32.198197 1099122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:09:32.206517 1099122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:09:32.217890 1099122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:09:32.225231 1099122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:09:32.232441 1099122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:09:32.239625 1099122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:09:32.246630 1099122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-999803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-999803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:09:32.246792 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:09:32.246883 1099122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:09:32.306653 1099122 cri.go:89] found id: "006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:09:32.306729 1099122 cri.go:89] found id: "92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:09:32.306748 1099122 cri.go:89] found id: "ad07ff871a074a1b4a7dabc075bef03722ba87f36572ade75097c7f0336caa7b"
	I0127 12:09:32.306773 1099122 cri.go:89] found id: "69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:09:32.306817 1099122 cri.go:89] found id: "f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:09:32.306874 1099122 cri.go:89] found id: "24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:09:32.306932 1099122 cri.go:89] found id: "8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:09:32.306957 1099122 cri.go:89] found id: "2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:09:32.306975 1099122 cri.go:89] found id: ""
	I0127 12:09:32.307053 1099122 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:09:32.328064 1099122 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:09:32Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 12:09:32.328191 1099122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:09:32.343347 1099122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:09:32.343418 1099122 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:09:32.343515 1099122 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:09:32.355944 1099122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:09:32.356480 1099122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-999803" does not appear in /home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 12:09:32.356650 1099122 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-888339/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-999803" cluster setting kubeconfig missing "old-k8s-version-999803" context setting]
	I0127 12:09:32.357075 1099122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/kubeconfig: {Name:mk75ddd380b783b9f157e482ffdcc29dbd635876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:09:32.358768 1099122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:09:32.369137 1099122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0127 12:09:32.369218 1099122 kubeadm.go:597] duration metric: took 25.779057ms to restartPrimaryControlPlane
	I0127 12:09:32.369242 1099122 kubeadm.go:394] duration metric: took 122.620377ms to StartCluster
	I0127 12:09:32.369287 1099122 settings.go:142] acquiring lock: {Name:mk8e4620a376eeb900823ad35149c0dd6d301c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:09:32.369378 1099122 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 12:09:32.370062 1099122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/kubeconfig: {Name:mk75ddd380b783b9f157e482ffdcc29dbd635876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:09:32.370310 1099122 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:09:32.370844 1099122 config.go:182] Loaded profile config "old-k8s-version-999803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 12:09:32.370818 1099122 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:09:32.371172 1099122 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-999803"
	I0127 12:09:32.371209 1099122 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-999803"
	W0127 12:09:32.371245 1099122 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:09:32.371290 1099122 host.go:66] Checking if "old-k8s-version-999803" exists ...
	I0127 12:09:32.371886 1099122 cli_runner.go:164] Run: docker container inspect old-k8s-version-999803 --format={{.State.Status}}
	I0127 12:09:32.372063 1099122 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-999803"
	I0127 12:09:32.372113 1099122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-999803"
	I0127 12:09:32.372433 1099122 cli_runner.go:164] Run: docker container inspect old-k8s-version-999803 --format={{.State.Status}}
	I0127 12:09:32.372811 1099122 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-999803"
	I0127 12:09:32.372830 1099122 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-999803"
	W0127 12:09:32.372836 1099122 addons.go:247] addon metrics-server should already be in state true
	I0127 12:09:32.372858 1099122 host.go:66] Checking if "old-k8s-version-999803" exists ...
	I0127 12:09:32.373465 1099122 cli_runner.go:164] Run: docker container inspect old-k8s-version-999803 --format={{.State.Status}}
	I0127 12:09:32.373766 1099122 addons.go:69] Setting dashboard=true in profile "old-k8s-version-999803"
	I0127 12:09:32.373803 1099122 addons.go:238] Setting addon dashboard=true in "old-k8s-version-999803"
	W0127 12:09:32.373838 1099122 addons.go:247] addon dashboard should already be in state true
	I0127 12:09:32.373882 1099122 host.go:66] Checking if "old-k8s-version-999803" exists ...
	I0127 12:09:32.374389 1099122 cli_runner.go:164] Run: docker container inspect old-k8s-version-999803 --format={{.State.Status}}
	I0127 12:09:32.376460 1099122 out.go:177] * Verifying Kubernetes components...
	I0127 12:09:32.384448 1099122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:09:32.426417 1099122 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-999803"
	W0127 12:09:32.426441 1099122 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:09:32.426466 1099122 host.go:66] Checking if "old-k8s-version-999803" exists ...
	I0127 12:09:32.426879 1099122 cli_runner.go:164] Run: docker container inspect old-k8s-version-999803 --format={{.State.Status}}
	I0127 12:09:32.457567 1099122 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:09:32.458953 1099122 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:09:32.461015 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:09:32.461134 1099122 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:09:32.461217 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:32.466895 1099122 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:09:32.466957 1099122 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:09:32.468136 1099122 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:09:32.468155 1099122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:09:32.468288 1099122 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:09:32.468296 1099122 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:09:32.468580 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:32.469050 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:32.510920 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:32.521468 1099122 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:09:32.521489 1099122 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:09:32.521551 1099122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-999803
	I0127 12:09:32.572078 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:32.584656 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:32.594969 1099122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/old-k8s-version-999803/id_rsa Username:docker}
	I0127 12:09:32.613717 1099122 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:09:32.662584 1099122 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-999803" to be "Ready" ...
	I0127 12:09:32.709374 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:09:32.709397 1099122 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:09:32.763316 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:09:32.763414 1099122 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:09:32.769985 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:09:32.788867 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:09:32.811845 1099122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:09:32.811871 1099122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:09:32.860268 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:09:32.860293 1099122 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:09:32.861629 1099122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:09:32.861648 1099122 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:09:32.922819 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:09:32.922844 1099122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:09:32.931379 1099122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:09:32.931405 1099122 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:09:33.009749 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:09:33.009777 1099122 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 12:09:33.028136 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:09:33.072011 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:09:33.072040 1099122 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0127 12:09:33.092674 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.092709 1099122 retry.go:31] will retry after 215.749644ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:33.092747 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.092760 1099122 retry.go:31] will retry after 342.392591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.129246 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:09:33.129274 1099122 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:09:33.204763 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:09:33.204788 1099122 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0127 12:09:33.230805 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.230839 1099122 retry.go:31] will retry after 324.502248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.242298 1099122 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:09:33.242327 1099122 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:09:33.260523 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:09:33.309108 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 12:09:33.362004 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.362038 1099122 retry.go:31] will retry after 248.439965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.436221 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 12:09:33.441825 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.441860 1099122 retry.go:31] will retry after 338.007574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:33.535294 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.535329 1099122 retry.go:31] will retry after 200.606528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.555554 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:09:33.610898 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 12:09:33.653675 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.653760 1099122 retry.go:31] will retry after 525.967005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:33.733542 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.733576 1099122 retry.go:31] will retry after 423.596219ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.736824 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:09:33.779998 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 12:09:33.841591 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.841621 1099122 retry.go:31] will retry after 364.299343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:33.927860 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:33.927896 1099122 retry.go:31] will retry after 661.414616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:34.158342 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:09:34.180175 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:09:34.207000 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 12:09:34.280344 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:34.280385 1099122 retry.go:31] will retry after 722.451237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:34.457040 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:34.457076 1099122 retry.go:31] will retry after 657.542567ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:34.461158 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:34.461198 1099122 retry.go:31] will retry after 854.797077ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:34.589545 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:09:34.663242 1099122 node_ready.go:53] error getting node "old-k8s-version-999803": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-999803": dial tcp 192.168.76.2:8443: connect: connection refused
	W0127 12:09:34.678391 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:34.678431 1099122 retry.go:31] will retry after 732.310994ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.004235 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:09:35.115552 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 12:09:35.120846 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.120930 1099122 retry.go:31] will retry after 725.801172ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:35.216756 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.216839 1099122 retry.go:31] will retry after 1.144933552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.317064 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:09:35.411423 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 12:09:35.421626 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.421711 1099122 retry.go:31] will retry after 1.353882074s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0127 12:09:35.509535 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.509620 1099122 retry.go:31] will retry after 1.490857259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.847537 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 12:09:35.952169 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:35.952211 1099122 retry.go:31] will retry after 1.510852016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:36.362267 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 12:09:36.478170 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:36.478255 1099122 retry.go:31] will retry after 741.738408ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:36.664100 1099122 node_ready.go:53] error getting node "old-k8s-version-999803": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-999803": dial tcp 192.168.76.2:8443: connect: connection refused
	I0127 12:09:36.776557 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 12:09:36.894820 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:36.894858 1099122 retry.go:31] will retry after 2.217654455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:37.001180 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 12:09:37.148389 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:37.148419 1099122 retry.go:31] will retry after 1.456107325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:37.220218 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 12:09:37.321405 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:37.321446 1099122 retry.go:31] will retry after 2.422198697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:37.463841 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 12:09:37.566522 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:37.566561 1099122 retry.go:31] will retry after 1.4666102s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:38.605130 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0127 12:09:38.714899 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:38.714938 1099122 retry.go:31] will retry after 3.82266818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:39.033394 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:09:39.112893 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 12:09:39.153075 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:39.153113 1099122 retry.go:31] will retry after 1.892643033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:39.163692 1099122 node_ready.go:53] error getting node "old-k8s-version-999803": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-999803": dial tcp 192.168.76.2:8443: connect: connection refused
	W0127 12:09:39.243540 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:39.243574 1099122 retry.go:31] will retry after 3.934943475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:39.744525 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 12:09:39.849927 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:39.849961 1099122 retry.go:31] will retry after 3.112229576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:41.046747 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0127 12:09:41.295684 1099122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:41.295721 1099122 retry.go:31] will retry after 5.587518335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 12:09:42.537834 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:09:42.963287 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:09:43.179108 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:09:46.885295 1099122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:09:51.665223 1099122 node_ready.go:53] error getting node "old-k8s-version-999803": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-999803": net/http: TLS handshake timeout
	I0127 12:09:53.290513 1099122 node_ready.go:49] node "old-k8s-version-999803" has status "Ready":"True"
	I0127 12:09:53.290545 1099122 node_ready.go:38] duration metric: took 20.627877462s for node "old-k8s-version-999803" to be "Ready" ...
	I0127 12:09:53.290557 1099122 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:09:53.613480 1099122 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-8pc5m" in "kube-system" namespace to be "Ready" ...
	I0127 12:09:53.683299 1099122 pod_ready.go:93] pod "coredns-74ff55c5b-8pc5m" in "kube-system" namespace has status "Ready":"True"
	I0127 12:09:53.683339 1099122 pod_ready.go:82] duration metric: took 69.822535ms for pod "coredns-74ff55c5b-8pc5m" in "kube-system" namespace to be "Ready" ...
	I0127 12:09:53.683353 1099122 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:09:53.726483 1099122 pod_ready.go:93] pod "etcd-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"True"
	I0127 12:09:53.726515 1099122 pod_ready.go:82] duration metric: took 43.154155ms for pod "etcd-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:09:53.726531 1099122 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:09:54.510253 1099122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.972361714s)
	I0127 12:09:55.431146 1099122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.467809108s)
	I0127 12:09:55.431337 1099122 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-999803"
	I0127 12:09:55.431261 1099122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.252120921s)
	I0127 12:09:55.547233 1099122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.661886894s)
	I0127 12:09:55.550367 1099122 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-999803 addons enable metrics-server
	
	I0127 12:09:55.553376 1099122 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0127 12:09:55.556384 1099122 addons.go:514] duration metric: took 23.185560997s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0127 12:09:55.756379 1099122 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:09:58.232516 1099122 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:00.303835 1099122 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:02.732144 1099122 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:03.733402 1099122 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"True"
	I0127 12:10:03.733430 1099122 pod_ready.go:82] duration metric: took 10.006890665s for pod "kube-apiserver-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:03.733443 1099122 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:05.740555 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:07.741384 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:10.239805 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:12.240144 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:14.240905 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:16.742205 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:18.742415 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:21.249112 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:23.740749 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:25.741758 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:27.758055 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:30.239728 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:32.239988 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:34.740785 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:36.741352 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:39.240630 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:41.740112 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:43.741550 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:45.741618 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:47.745656 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:50.240430 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:52.240688 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:54.741996 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:57.239927 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:59.240275 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:01.241298 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:03.740357 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:06.239980 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:08.741190 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:10.741646 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:11.741804 1099122 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"True"
	I0127 12:11:11.741830 1099122 pod_ready.go:82] duration metric: took 1m8.008379088s for pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:11.741843 1099122 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nt2l9" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:11.747091 1099122 pod_ready.go:93] pod "kube-proxy-nt2l9" in "kube-system" namespace has status "Ready":"True"
	I0127 12:11:11.747117 1099122 pod_ready.go:82] duration metric: took 5.26649ms for pod "kube-proxy-nt2l9" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:11.747129 1099122 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:13.753603 1099122 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"True"
	I0127 12:11:13.753625 1099122 pod_ready.go:82] duration metric: took 2.006488609s for pod "kube-scheduler-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:13.753636 1099122 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:15.760059 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:17.760104 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:20.259362 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:22.259503 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:24.265513 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:26.759839 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:29.259521 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:31.759761 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:33.760790 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:36.259938 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:38.260402 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:40.260435 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:42.261372 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:44.759807 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:46.760139 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:49.259135 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:51.260408 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:53.761606 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:56.262295 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:58.760329 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:00.760445 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:03.259143 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:05.260247 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:07.759197 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:09.760214 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:11.760909 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:14.259817 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:16.260506 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:18.766515 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:21.259066 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:23.259646 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:25.259764 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:27.259886 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:29.758956 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:31.765446 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:34.259539 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:36.259690 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:38.259810 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:40.265756 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:42.759695 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:44.760001 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:47.264085 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:49.759794 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:51.760814 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:54.260573 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:56.261649 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:58.760562 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:01.260514 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:03.760846 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:06.259692 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:08.259809 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:10.264009 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:12.760961 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:15.259543 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:17.259672 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:19.266685 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:21.760466 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:23.760739 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:26.260232 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:28.759672 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:30.759737 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:32.760624 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:35.260200 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:37.760079 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:39.760771 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:42.261258 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:44.760471 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:47.260741 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:49.760653 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:51.761150 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:54.260064 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:56.260168 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:58.759553 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:00.760997 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:03.259192 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:05.260842 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:07.759902 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:09.760696 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:11.761676 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:14.259176 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:16.260538 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:18.760162 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:20.761412 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:23.259212 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:25.259378 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:27.260207 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:29.761136 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:32.259719 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:34.260064 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:36.760335 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:38.760390 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:41.260552 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:43.260596 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:45.305357 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:47.761135 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:49.762322 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:52.262267 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:54.760292 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:57.260208 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:59.760442 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:01.760706 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:04.260789 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:06.759923 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:08.761236 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:10.761497 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:13.259288 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:13.754136 1099122 pod_ready.go:82] duration metric: took 4m0.000481802s for pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace to be "Ready" ...
	E0127 12:15:13.754168 1099122 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 12:15:13.754179 1099122 pod_ready.go:39] duration metric: took 5m20.463611828s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
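The four-minute wait that ends here is a readiness poll: the pod's Ready condition is re-checked roughly every two seconds until it flips to True or the 4m0s deadline expires, which is what happens in this run because metrics-server never starts (see the ErrImagePull entries for fake.domain/registry.k8s.io/echoserver:1.4 in the kubelet log further down). A minimal client-go sketch of such a poll is shown below; it assumes a pre-built clientset and the 2s/4m numbers from the log, and is not minikube's actual pod_ready.go.

// Sketch only: poll a pod's Ready condition until it is True or the context
// deadline passes. Interval and timeout are assumptions taken from the log.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady returns nil once the pod reports Ready=True, or the context
// error (e.g. deadline exceeded) if the timeout elapses first.
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second) // log shows ~2-2.5s between checks
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
		case <-ticker.C:
		}
	}
}

Called with a context from context.WithTimeout(ctx, 4*time.Minute), this reproduces the "context deadline exceeded" outcome logged at 12:15:13.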
	I0127 12:15:13.754196 1099122 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:15:13.754230 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:15:13.754302 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:15:13.794220 1099122 cri.go:89] found id: "709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:13.794243 1099122 cri.go:89] found id: "f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:13.794248 1099122 cri.go:89] found id: ""
	I0127 12:15:13.794255 1099122 logs.go:282] 2 containers: [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e]
	I0127 12:15:13.794338 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.797981 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.801452 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 12:15:13.801523 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:15:13.839152 1099122 cri.go:89] found id: "8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:13.839179 1099122 cri.go:89] found id: "2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:13.839185 1099122 cri.go:89] found id: ""
	I0127 12:15:13.839192 1099122 logs.go:282] 2 containers: [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1]
	I0127 12:15:13.839249 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.842927 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.846323 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 12:15:13.846397 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:15:13.884753 1099122 cri.go:89] found id: "66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:13.884776 1099122 cri.go:89] found id: "006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:13.884781 1099122 cri.go:89] found id: ""
	I0127 12:15:13.884787 1099122 logs.go:282] 2 containers: [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863]
	I0127 12:15:13.884849 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.888585 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.892544 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:15:13.892620 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:15:13.935201 1099122 cri.go:89] found id: "15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:13.935265 1099122 cri.go:89] found id: "8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:13.935275 1099122 cri.go:89] found id: ""
	I0127 12:15:13.935282 1099122 logs.go:282] 2 containers: [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b]
	I0127 12:15:13.935348 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.938912 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.942321 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:15:13.942420 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:15:13.982013 1099122 cri.go:89] found id: "244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:13.982037 1099122 cri.go:89] found id: "69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:13.982042 1099122 cri.go:89] found id: ""
	I0127 12:15:13.982049 1099122 logs.go:282] 2 containers: [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d]
	I0127 12:15:13.982107 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.985808 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.989196 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:15:13.989297 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:15:14.034072 1099122 cri.go:89] found id: "35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:14.034096 1099122 cri.go:89] found id: "24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:14.034103 1099122 cri.go:89] found id: ""
	I0127 12:15:14.034110 1099122 logs.go:282] 2 containers: [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84]
	I0127 12:15:14.034175 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.038229 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.041981 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 12:15:14.042087 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:15:14.096637 1099122 cri.go:89] found id: "8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:14.096662 1099122 cri.go:89] found id: "92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:14.096667 1099122 cri.go:89] found id: ""
	I0127 12:15:14.096674 1099122 logs.go:282] 2 containers: [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f]
	I0127 12:15:14.096735 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.100700 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.104367 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:15:14.104440 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:15:14.142295 1099122 cri.go:89] found id: "d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:14.142318 1099122 cri.go:89] found id: ""
	I0127 12:15:14.142341 1099122 logs.go:282] 1 containers: [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6]
	I0127 12:15:14.142395 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.145773 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 12:15:14.145852 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 12:15:14.184211 1099122 cri.go:89] found id: "1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:14.184233 1099122 cri.go:89] found id: "072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:14.184239 1099122 cri.go:89] found id: ""
	I0127 12:15:14.184246 1099122 logs.go:282] 2 containers: [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4]
	I0127 12:15:14.184300 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.187804 1099122 ssh_runner.go:195] Run: which crictl
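Each "listing CRI containers" step above shells out to crictl to collect the container IDs for one component name (two IDs per component in this run, one container from before the restart and one after), and the interleaved "which crictl" runs simply resolve the binary path used by the log-gathering commands that follow. A rough local equivalent of one listing step, using only the flags visible in the log, might look like this sketch:

// Sketch: list all (running and exited) container IDs whose name matches a
// component, via the same crictl flags that appear in the log above.
package crilist

import (
	"os/exec"
	"strings"
)

// ListContainerIDs returns the IDs printed (one per line) by
// `sudo crictl ps -a --quiet --name=<name>`.
func ListContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}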
	I0127 12:15:14.191031 1099122 logs.go:123] Gathering logs for dmesg ...
	I0127 12:15:14.191103 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:15:14.210127 1099122 logs.go:123] Gathering logs for etcd [2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1] ...
	I0127 12:15:14.210159 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:14.250486 1099122 logs.go:123] Gathering logs for kube-controller-manager [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d] ...
	I0127 12:15:14.250517 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:14.310627 1099122 logs.go:123] Gathering logs for kubernetes-dashboard [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6] ...
	I0127 12:15:14.310663 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:14.357158 1099122 logs.go:123] Gathering logs for kubelet ...
	I0127 12:15:14.357187 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:15:14.421142 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.065326     663 reflector.go:138] object-"kube-system"/"metrics-server-token-827qh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-827qh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.421398 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.068245     663 reflector.go:138] object-"kube-system"/"kindnet-token-jxc27": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jxc27" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.421622 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.072508     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-pzcvk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-pzcvk" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.421857 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.092394     663 reflector.go:138] object-"kube-system"/"coredns-token-m2lsh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2lsh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422066 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.102861     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422281 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.106409     663 reflector.go:138] object-"default"/"default-token-pmbfm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pmbfm" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422509 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135445     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bbnlz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bbnlz" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422714 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135737     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.431944 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:55 old-k8s-version-999803 kubelet[663]: E0127 12:09:55.695476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.432135 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:56 old-k8s-version-999803 kubelet[663]: E0127 12:09:56.587407     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.434973 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:08 old-k8s-version-999803 kubelet[663]: E0127 12:10:08.310539     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.436943 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:18 old-k8s-version-999803 kubelet[663]: E0127 12:10:18.685285     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.437410 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:19 old-k8s-version-999803 kubelet[663]: E0127 12:10:19.697364     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.437932 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:23 old-k8s-version-999803 kubelet[663]: E0127 12:10:23.289410     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.438369 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:27 old-k8s-version-999803 kubelet[663]: E0127 12:10:27.725508     663 pod_workers.go:191] Error syncing pod f73574be-9aec-4a33-ac88-97d900488a22 ("storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"
	W0127 12:15:14.439000 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:29 old-k8s-version-999803 kubelet[663]: E0127 12:10:29.743840     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.441866 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:35 old-k8s-version-999803 kubelet[663]: E0127 12:10:35.298365     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.442236 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:38 old-k8s-version-999803 kubelet[663]: E0127 12:10:38.666640     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.442554 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:46 old-k8s-version-999803 kubelet[663]: E0127 12:10:46.289557     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.443148 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:53 old-k8s-version-999803 kubelet[663]: E0127 12:10:53.813459     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.443334 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:57 old-k8s-version-999803 kubelet[663]: E0127 12:10:57.289244     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.443693 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:58 old-k8s-version-999803 kubelet[663]: E0127 12:10:58.662563     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.443879 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.289633     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.444208 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.290664     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.446683 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:21 old-k8s-version-999803 kubelet[663]: E0127 12:11:21.295133     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.447018 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:25 old-k8s-version-999803 kubelet[663]: E0127 12:11:25.289132     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.447202 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:36 old-k8s-version-999803 kubelet[663]: E0127 12:11:36.291476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.447862 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:40 old-k8s-version-999803 kubelet[663]: E0127 12:11:40.940519     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.448197 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:48 old-k8s-version-999803 kubelet[663]: E0127 12:11:48.662783     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.448380 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:50 old-k8s-version-999803 kubelet[663]: E0127 12:11:50.289726     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.448722 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.288851     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.448905 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.289979     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.449125 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:14 old-k8s-version-999803 kubelet[663]: E0127 12:12:14.295027     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.449483 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:18 old-k8s-version-999803 kubelet[663]: E0127 12:12:18.288843     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.449672 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:26 old-k8s-version-999803 kubelet[663]: E0127 12:12:26.289650     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.450001 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:30 old-k8s-version-999803 kubelet[663]: E0127 12:12:30.289329     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.450184 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:37 old-k8s-version-999803 kubelet[663]: E0127 12:12:37.289235     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.450509 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:44 old-k8s-version-999803 kubelet[663]: E0127 12:12:44.288931     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.452938 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:50 old-k8s-version-999803 kubelet[663]: E0127 12:12:50.299810     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.453274 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:57 old-k8s-version-999803 kubelet[663]: E0127 12:12:57.288813     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.453460 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:03 old-k8s-version-999803 kubelet[663]: E0127 12:13:03.289391     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.454055 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:11 old-k8s-version-999803 kubelet[663]: E0127 12:13:11.199586     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.454245 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:14 old-k8s-version-999803 kubelet[663]: E0127 12:13:14.289503     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.454570 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:18 old-k8s-version-999803 kubelet[663]: E0127 12:13:18.662572     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.454756 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:29 old-k8s-version-999803 kubelet[663]: E0127 12:13:29.289301     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.455083 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:31 old-k8s-version-999803 kubelet[663]: E0127 12:13:31.288795     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.455286 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.289460     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.455615 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.290963     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.455808 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:53 old-k8s-version-999803 kubelet[663]: E0127 12:13:53.289152     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.456133 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:57 old-k8s-version-999803 kubelet[663]: E0127 12:13:57.288829     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.456320 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:07 old-k8s-version-999803 kubelet[663]: E0127 12:14:07.289177     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.456652 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:12 old-k8s-version-999803 kubelet[663]: E0127 12:14:12.289332     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.456835 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:21 old-k8s-version-999803 kubelet[663]: E0127 12:14:21.289169     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.457166 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:26 old-k8s-version-999803 kubelet[663]: E0127 12:14:26.288817     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.457351 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:32 old-k8s-version-999803 kubelet[663]: E0127 12:14:32.289382     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.457676 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:37 old-k8s-version-999803 kubelet[663]: E0127 12:14:37.288718     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.457860 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.458184 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.458369 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.458694 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.458877 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
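The W "Found kubelet problem" lines above come from scanning the journalctl output gathered at 12:15:14 for known error markers; every hit is recorded so it can be surfaced in the "Problems detected in kubelet" block at the end of the log. In this run the hits fall into two groups: the metrics-server pod repeatedly failing to pull fake.domain/registry.k8s.io/echoserver:1.4 (an unresolvable registry, which is why the Ready wait above timed out) and dashboard-metrics-scraper crash-looping. A simple scanner of that shape is sketched below; the marker strings are assumptions for illustration, not the actual patterns in logs.go.

// Sketch of a journal scanner that flags lines containing error markers.
// The marker list here is illustrative only.
package kubeletscan

import (
	"bufio"
	"strings"
)

var problemMarkers = []string{
	"Error syncing pod", // pod_workers.go back-off / image pull failures
	"Failed to watch",   // reflector.go permission errors during startup
}

// FindProblems returns every journal line that contains one of the markers.
func FindProblems(journal string) []string {
	var hits []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, m := range problemMarkers {
			if strings.Contains(line, m) {
				hits = append(hits, line)
				break
			}
		}
	}
	return hits
}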
	I0127 12:15:14.458888 1099122 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:15:14.458906 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:15:14.600554 1099122 logs.go:123] Gathering logs for coredns [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab] ...
	I0127 12:15:14.600589 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:14.642894 1099122 logs.go:123] Gathering logs for kube-controller-manager [24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84] ...
	I0127 12:15:14.642924 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:14.708414 1099122 logs.go:123] Gathering logs for etcd [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9] ...
	I0127 12:15:14.708446 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:14.750297 1099122 logs.go:123] Gathering logs for coredns [006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863] ...
	I0127 12:15:14.750327 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:14.787004 1099122 logs.go:123] Gathering logs for kube-scheduler [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47] ...
	I0127 12:15:14.787031 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:14.828360 1099122 logs.go:123] Gathering logs for kube-proxy [69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d] ...
	I0127 12:15:14.828389 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:14.873162 1099122 logs.go:123] Gathering logs for kindnet [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026] ...
	I0127 12:15:14.873189 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:14.921463 1099122 logs.go:123] Gathering logs for kindnet [92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f] ...
	I0127 12:15:14.921495 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:14.974595 1099122 logs.go:123] Gathering logs for storage-provisioner [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609] ...
	I0127 12:15:14.974623 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:15.034157 1099122 logs.go:123] Gathering logs for storage-provisioner [072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4] ...
	I0127 12:15:15.034189 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:15.077801 1099122 logs.go:123] Gathering logs for kube-apiserver [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f] ...
	I0127 12:15:15.077829 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:15.150398 1099122 logs.go:123] Gathering logs for kube-apiserver [f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e] ...
	I0127 12:15:15.150475 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:15.219967 1099122 logs.go:123] Gathering logs for kube-scheduler [8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b] ...
	I0127 12:15:15.220021 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:15.267003 1099122 logs.go:123] Gathering logs for kube-proxy [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56] ...
	I0127 12:15:15.267075 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:15.305261 1099122 logs.go:123] Gathering logs for containerd ...
	I0127 12:15:15.305350 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 12:15:15.379123 1099122 logs.go:123] Gathering logs for container status ...
	I0127 12:15:15.379163 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:15:15.437988 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:15.438018 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 12:15:15.438072 1099122 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0127 12:15:15.438088 1099122 out.go:270]   Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:15.438101 1099122 out.go:270]   Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	  Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:15.438126 1099122 out.go:270]   Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:15.438132 1099122 out.go:270]   Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	  Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:15.438137 1099122 out.go:270]   Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 12:15:15.438142 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:15.438151 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
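The summary block above repeats the two kubelet problems this run keeps hitting: metrics-server stuck in ImagePullBackOff on the deliberately unreachable fake.domain image, and dashboard-metrics-scraper in CrashLoopBackOff. As a minimal follow-up sketch (the pod name, kubectl path, and kubeconfig path are taken from the log; running describe at this point is an assumption, not part of the test flow), the pull events behind the back-off could be inspected with:

	# hypothetical follow-up, not executed by the test
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system describe pod metrics-server-9975d5f86-qzhdd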
	I0127 12:15:25.440470 1099122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:15:25.452605 1099122 api_server.go:72] duration metric: took 5m53.082233775s to wait for apiserver process to appear ...
	I0127 12:15:25.452629 1099122 api_server.go:88] waiting for apiserver healthz status ...
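At this point the run switches from waiting for the kube-apiserver process to waiting for its healthz status. A minimal manual equivalent, assuming the conventional /healthz path and the default control-plane port 8443 (the concrete address comes from the cluster profile and is not printed on these lines):

	# APISERVER is a placeholder; substitute the cluster's advertised address
	APISERVER="https://<control-plane-ip>:8443"
	curl -k "$APISERVER/healthz"   # expected to return "ok" once the apiserver reports healthy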
	I0127 12:15:25.452667 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:15:25.452724 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:15:25.494090 1099122 cri.go:89] found id: "709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:25.494115 1099122 cri.go:89] found id: "f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:25.494121 1099122 cri.go:89] found id: ""
	I0127 12:15:25.494128 1099122 logs.go:282] 2 containers: [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e]
	I0127 12:15:25.494189 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.497645 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.500895 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 12:15:25.500968 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:15:25.542357 1099122 cri.go:89] found id: "8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:25.542442 1099122 cri.go:89] found id: "2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:25.542451 1099122 cri.go:89] found id: ""
	I0127 12:15:25.542460 1099122 logs.go:282] 2 containers: [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1]
	I0127 12:15:25.542525 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.548254 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.552119 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 12:15:25.552193 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:15:25.627449 1099122 cri.go:89] found id: "66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:25.627471 1099122 cri.go:89] found id: "006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:25.627476 1099122 cri.go:89] found id: ""
	I0127 12:15:25.627484 1099122 logs.go:282] 2 containers: [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863]
	I0127 12:15:25.627539 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.631955 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.635615 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:15:25.635695 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:15:25.686029 1099122 cri.go:89] found id: "15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:25.686052 1099122 cri.go:89] found id: "8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:25.686057 1099122 cri.go:89] found id: ""
	I0127 12:15:25.686063 1099122 logs.go:282] 2 containers: [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b]
	I0127 12:15:25.686121 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.691005 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.696361 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:15:25.696439 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:15:25.735204 1099122 cri.go:89] found id: "244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:25.735228 1099122 cri.go:89] found id: "69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:25.735233 1099122 cri.go:89] found id: ""
	I0127 12:15:25.735246 1099122 logs.go:282] 2 containers: [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d]
	I0127 12:15:25.735318 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.738739 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.742012 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:15:25.742080 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:15:25.783705 1099122 cri.go:89] found id: "35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:25.783728 1099122 cri.go:89] found id: "24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:25.783733 1099122 cri.go:89] found id: ""
	I0127 12:15:25.783740 1099122 logs.go:282] 2 containers: [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84]
	I0127 12:15:25.783798 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.787402 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.790806 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 12:15:25.790883 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:15:25.830016 1099122 cri.go:89] found id: "8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:25.830038 1099122 cri.go:89] found id: "92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:25.830043 1099122 cri.go:89] found id: ""
	I0127 12:15:25.830050 1099122 logs.go:282] 2 containers: [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f]
	I0127 12:15:25.830108 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.834070 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.837688 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 12:15:25.837767 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 12:15:25.880360 1099122 cri.go:89] found id: "1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:25.880381 1099122 cri.go:89] found id: "072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:25.880386 1099122 cri.go:89] found id: ""
	I0127 12:15:25.880394 1099122 logs.go:282] 2 containers: [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4]
	I0127 12:15:25.880459 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.884176 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.888084 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:15:25.888159 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:15:25.929160 1099122 cri.go:89] found id: "d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:25.929185 1099122 cri.go:89] found id: ""
	I0127 12:15:25.929193 1099122 logs.go:282] 1 containers: [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6]
	I0127 12:15:25.929257 1099122 ssh_runner.go:195] Run: which crictl
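The surrounding lines show the per-component pattern used for log collection: resolve container IDs with crictl ps, then tail each container's log. A condensed sketch of those two steps for one component (the container name and tail length are taken from the log; the shell wrapper itself is only illustrative):

	# list kube-apiserver containers, then tail one of them, mirroring the commands above
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo /usr/bin/crictl logs --tail 400 "$ID"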
	I0127 12:15:25.934269 1099122 logs.go:123] Gathering logs for kindnet [92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f] ...
	I0127 12:15:25.934297 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:25.978515 1099122 logs.go:123] Gathering logs for kube-scheduler [8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b] ...
	I0127 12:15:25.978546 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:26.023365 1099122 logs.go:123] Gathering logs for kube-proxy [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56] ...
	I0127 12:15:26.023397 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:26.074845 1099122 logs.go:123] Gathering logs for kube-proxy [69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d] ...
	I0127 12:15:26.074873 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:26.126098 1099122 logs.go:123] Gathering logs for container status ...
	I0127 12:15:26.126177 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:15:26.170530 1099122 logs.go:123] Gathering logs for dmesg ...
	I0127 12:15:26.170571 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:15:26.190451 1099122 logs.go:123] Gathering logs for coredns [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab] ...
	I0127 12:15:26.190479 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:26.240411 1099122 logs.go:123] Gathering logs for storage-provisioner [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609] ...
	I0127 12:15:26.240439 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:26.285457 1099122 logs.go:123] Gathering logs for kindnet [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026] ...
	I0127 12:15:26.285491 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:26.332919 1099122 logs.go:123] Gathering logs for containerd ...
	I0127 12:15:26.332949 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 12:15:26.397666 1099122 logs.go:123] Gathering logs for kube-apiserver [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f] ...
	I0127 12:15:26.397707 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:26.459604 1099122 logs.go:123] Gathering logs for etcd [2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1] ...
	I0127 12:15:26.459638 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:26.511966 1099122 logs.go:123] Gathering logs for kube-scheduler [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47] ...
	I0127 12:15:26.512123 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:26.553498 1099122 logs.go:123] Gathering logs for etcd [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9] ...
	I0127 12:15:26.553568 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:26.619235 1099122 logs.go:123] Gathering logs for coredns [006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863] ...
	I0127 12:15:26.619267 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:26.662492 1099122 logs.go:123] Gathering logs for kube-controller-manager [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d] ...
	I0127 12:15:26.662523 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:26.717270 1099122 logs.go:123] Gathering logs for kube-controller-manager [24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84] ...
	I0127 12:15:26.717303 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:26.789783 1099122 logs.go:123] Gathering logs for storage-provisioner [072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4] ...
	I0127 12:15:26.789827 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:26.841274 1099122 logs.go:123] Gathering logs for kubelet ...
	I0127 12:15:26.841302 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:15:26.899918 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.065326     663 reflector.go:138] object-"kube-system"/"metrics-server-token-827qh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-827qh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900177 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.068245     663 reflector.go:138] object-"kube-system"/"kindnet-token-jxc27": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jxc27" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900398 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.072508     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-pzcvk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-pzcvk" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900685 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.092394     663 reflector.go:138] object-"kube-system"/"coredns-token-m2lsh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2lsh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900893 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.102861     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.901112 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.106409     663 reflector.go:138] object-"default"/"default-token-pmbfm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pmbfm" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.901342 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135445     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bbnlz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bbnlz" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.901547 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135737     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.910804 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:55 old-k8s-version-999803 kubelet[663]: E0127 12:09:55.695476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.910996 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:56 old-k8s-version-999803 kubelet[663]: E0127 12:09:56.587407     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.913762 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:08 old-k8s-version-999803 kubelet[663]: E0127 12:10:08.310539     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.915693 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:18 old-k8s-version-999803 kubelet[663]: E0127 12:10:18.685285     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.916152 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:19 old-k8s-version-999803 kubelet[663]: E0127 12:10:19.697364     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.916671 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:23 old-k8s-version-999803 kubelet[663]: E0127 12:10:23.289410     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.917136 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:27 old-k8s-version-999803 kubelet[663]: E0127 12:10:27.725508     663 pod_workers.go:191] Error syncing pod f73574be-9aec-4a33-ac88-97d900488a22 ("storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"
	W0127 12:15:26.917721 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:29 old-k8s-version-999803 kubelet[663]: E0127 12:10:29.743840     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.920519 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:35 old-k8s-version-999803 kubelet[663]: E0127 12:10:35.298365     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.920848 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:38 old-k8s-version-999803 kubelet[663]: E0127 12:10:38.666640     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.921170 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:46 old-k8s-version-999803 kubelet[663]: E0127 12:10:46.289557     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.921758 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:53 old-k8s-version-999803 kubelet[663]: E0127 12:10:53.813459     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.921942 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:57 old-k8s-version-999803 kubelet[663]: E0127 12:10:57.289244     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.922272 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:58 old-k8s-version-999803 kubelet[663]: E0127 12:10:58.662563     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.922455 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.289633     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.922880 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.290664     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.925368 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:21 old-k8s-version-999803 kubelet[663]: E0127 12:11:21.295133     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.925709 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:25 old-k8s-version-999803 kubelet[663]: E0127 12:11:25.289132     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.925894 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:36 old-k8s-version-999803 kubelet[663]: E0127 12:11:36.291476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.926506 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:40 old-k8s-version-999803 kubelet[663]: E0127 12:11:40.940519     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.926833 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:48 old-k8s-version-999803 kubelet[663]: E0127 12:11:48.662783     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.927017 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:50 old-k8s-version-999803 kubelet[663]: E0127 12:11:50.289726     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.927348 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.288851     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.927543 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.289979     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.927729 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:14 old-k8s-version-999803 kubelet[663]: E0127 12:12:14.295027     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.928059 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:18 old-k8s-version-999803 kubelet[663]: E0127 12:12:18.288843     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.928244 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:26 old-k8s-version-999803 kubelet[663]: E0127 12:12:26.289650     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.928570 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:30 old-k8s-version-999803 kubelet[663]: E0127 12:12:30.289329     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.928753 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:37 old-k8s-version-999803 kubelet[663]: E0127 12:12:37.289235     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.929138 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:44 old-k8s-version-999803 kubelet[663]: E0127 12:12:44.288931     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.931588 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:50 old-k8s-version-999803 kubelet[663]: E0127 12:12:50.299810     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.931921 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:57 old-k8s-version-999803 kubelet[663]: E0127 12:12:57.288813     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.932106 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:03 old-k8s-version-999803 kubelet[663]: E0127 12:13:03.289391     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.932704 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:11 old-k8s-version-999803 kubelet[663]: E0127 12:13:11.199586     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.932890 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:14 old-k8s-version-999803 kubelet[663]: E0127 12:13:14.289503     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.933222 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:18 old-k8s-version-999803 kubelet[663]: E0127 12:13:18.662572     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.933406 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:29 old-k8s-version-999803 kubelet[663]: E0127 12:13:29.289301     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.933735 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:31 old-k8s-version-999803 kubelet[663]: E0127 12:13:31.288795     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.933921 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.289460     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.934248 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.290963     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.934432 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:53 old-k8s-version-999803 kubelet[663]: E0127 12:13:53.289152     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.934759 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:57 old-k8s-version-999803 kubelet[663]: E0127 12:13:57.288829     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.934942 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:07 old-k8s-version-999803 kubelet[663]: E0127 12:14:07.289177     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.935267 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:12 old-k8s-version-999803 kubelet[663]: E0127 12:14:12.289332     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.935452 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:21 old-k8s-version-999803 kubelet[663]: E0127 12:14:21.289169     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.935784 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:26 old-k8s-version-999803 kubelet[663]: E0127 12:14:26.288817     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.935967 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:32 old-k8s-version-999803 kubelet[663]: E0127 12:14:32.289382     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.936292 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:37 old-k8s-version-999803 kubelet[663]: E0127 12:14:37.288718     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.936477 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.936802 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.936988 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.937320 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.937506 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.937836 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:16 old-k8s-version-999803 kubelet[663]: E0127 12:15:16.289567     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.938019 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:24 old-k8s-version-999803 kubelet[663]: E0127 12:15:24.289280     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
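Everything flagged above comes from scanning the last 400 kubelet journal lines for known problem patterns. A rough shell equivalent, assuming the grep expression only approximates what logs.go:138 actually matches in minikube's source:

	sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'Error syncing pod|Failed to watch'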
	I0127 12:15:26.938029 1099122 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:15:26.938044 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:15:27.111830 1099122 logs.go:123] Gathering logs for kube-apiserver [f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e] ...
	I0127 12:15:27.111864 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:27.165850 1099122 logs.go:123] Gathering logs for kubernetes-dashboard [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6] ...
	I0127 12:15:27.165886 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:27.211532 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:27.211556 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 12:15:27.211643 1099122 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0127 12:15:27.211657 1099122 out.go:270]   Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:27.211679 1099122 out.go:270]   Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	  Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:27.211711 1099122 out.go:270]   Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:27.211720 1099122 out.go:270]   Jan 27 12:15:16 old-k8s-version-999803 kubelet[663]: E0127 12:15:16.289567     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	  Jan 27 12:15:16 old-k8s-version-999803 kubelet[663]: E0127 12:15:16.289567     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:27.211727 1099122 out.go:270]   Jan 27 12:15:24 old-k8s-version-999803 kubelet[663]: E0127 12:15:24.289280     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jan 27 12:15:24 old-k8s-version-999803 kubelet[663]: E0127 12:15:24.289280     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 12:15:27.211737 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:27.211744 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:15:37.213669 1099122 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 12:15:37.226947 1099122 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 12:15:37.229972 1099122 out.go:201] 
	W0127 12:15:37.232459 1099122 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0127 12:15:37.232495 1099122 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0127 12:15:37.232520 1099122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0127 12:15:37.232529 1099122 out.go:270] * 
	* 
	W0127 12:15:37.233525 1099122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 12:15:37.237108 1099122 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-999803 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
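For local reproduction, the failing post-stop start and the recovery path suggested in the captured stderr above can be retried roughly as follows (a sketch only; every command and flag below is copied from this report, and profile names/paths may differ on other hosts):

	# Sketch, assuming the same out/minikube-linux-arm64 binary and profile as this run.
	# Re-run the post-stop start that exited with status 102:
	out/minikube-linux-arm64 start -p old-k8s-version-999803 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
	# If the control plane still never updates to v1.20.0, the log suggests a full cleanup:
	out/minikube-linux-arm64 delete --all --purge
	# Collect logs for a GitHub issue, as advised in the warning box above:
	out/minikube-linux-arm64 -p old-k8s-version-999803 logs --file=logs.txt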
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-999803
helpers_test.go:235: (dbg) docker inspect old-k8s-version-999803:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "677d3676ac1fbe8f2697346fc0eac2d7484b857cb067ddc8cb41a960e04edf15",
	        "Created": "2025-01-27T12:06:42.163950524Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1099337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-01-27T12:09:23.726651729Z",
	            "FinishedAt": "2025-01-27T12:09:22.635251532Z"
	        },
	        "Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
	        "ResolvConfPath": "/var/lib/docker/containers/677d3676ac1fbe8f2697346fc0eac2d7484b857cb067ddc8cb41a960e04edf15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/677d3676ac1fbe8f2697346fc0eac2d7484b857cb067ddc8cb41a960e04edf15/hostname",
	        "HostsPath": "/var/lib/docker/containers/677d3676ac1fbe8f2697346fc0eac2d7484b857cb067ddc8cb41a960e04edf15/hosts",
	        "LogPath": "/var/lib/docker/containers/677d3676ac1fbe8f2697346fc0eac2d7484b857cb067ddc8cb41a960e04edf15/677d3676ac1fbe8f2697346fc0eac2d7484b857cb067ddc8cb41a960e04edf15-json.log",
	        "Name": "/old-k8s-version-999803",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-999803:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-999803",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0fcf769388dfba01188e437d9f3a77650f13f130de7eb4c70f4f60472407768-init/diff:/var/lib/docker/overlay2/027cb12703497bfe682a04123361dc92cd40ae4c78d3ee9eafeedefee7ad1bd7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0fcf769388dfba01188e437d9f3a77650f13f130de7eb4c70f4f60472407768/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0fcf769388dfba01188e437d9f3a77650f13f130de7eb4c70f4f60472407768/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0fcf769388dfba01188e437d9f3a77650f13f130de7eb4c70f4f60472407768/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-999803",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-999803/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-999803",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-999803",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-999803",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f4d1a9bef0aedd6cd51c57ae0d895ec6b9107e1526207734103f13b24ede6ea",
	            "SandboxKey": "/var/run/docker/netns/5f4d1a9bef0a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-999803": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b4e46dc8ac51fed9012b81352b7879f1f93fd5c5934f073610fd15fd0544b6d6",
	                    "EndpointID": "223d6f73923c0ac688cc4d02596f5afe549c03199e4350abfc72029a3a44ada7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-999803",
	                        "677d3676ac1f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-999803 -n old-k8s-version-999803
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-999803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-999803 logs -n 25: (2.138170418s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-106238 sudo                                  | cilium-106238            | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC |                     |
	|         | containerd config dump                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-106238 sudo                                  | cilium-106238            | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC |                     |
	|         | systemctl status crio --all                            |                          |         |         |                     |                     |
	|         | --full --no-pager                                      |                          |         |         |                     |                     |
	| ssh     | -p cilium-106238 sudo                                  | cilium-106238            | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-106238 sudo find                             | cilium-106238            | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                          |         |         |                     |                     |
	| ssh     | -p cilium-106238 sudo crio                             | cilium-106238            | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC |                     |
	|         | config                                                 |                          |         |         |                     |                     |
	| delete  | -p cilium-106238                                       | cilium-106238            | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	| start   | -p cert-expiration-972837                              | cert-expiration-972837   | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:06 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-947488                               | force-systemd-env-947488 | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-947488                            | force-systemd-env-947488 | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	| start   | -p cert-options-429275                                 | cert-options-429275      | jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:06 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-429275 ssh                                | cert-options-429275      | jenkins | v1.35.0 | 27 Jan 25 12:06 UTC | 27 Jan 25 12:06 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-429275 -- sudo                         | cert-options-429275      | jenkins | v1.35.0 | 27 Jan 25 12:06 UTC | 27 Jan 25 12:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-429275                                 | cert-options-429275      | jenkins | v1.35.0 | 27 Jan 25 12:06 UTC | 27 Jan 25 12:06 UTC |
	| start   | -p old-k8s-version-999803                              | old-k8s-version-999803   | jenkins | v1.35.0 | 27 Jan 25 12:06 UTC | 27 Jan 25 12:08 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-972837                              | cert-expiration-972837   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-999803        | old-k8s-version-999803   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-972837                              | cert-expiration-972837   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	| stop    | -p old-k8s-version-999803                              | old-k8s-version-999803   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p no-preload-835765                                   | no-preload-835765        | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:10 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-999803             | old-k8s-version-999803   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-999803                              | old-k8s-version-999803   | jenkins | v1.35.0 | 27 Jan 25 12:09 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-835765             | no-preload-835765        | jenkins | v1.35.0 | 27 Jan 25 12:10 UTC | 27 Jan 25 12:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-835765                                   | no-preload-835765        | jenkins | v1.35.0 | 27 Jan 25 12:10 UTC | 27 Jan 25 12:10 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-835765                  | no-preload-835765        | jenkins | v1.35.0 | 27 Jan 25 12:10 UTC | 27 Jan 25 12:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-835765                                   | no-preload-835765        | jenkins | v1.35.0 | 27 Jan 25 12:10 UTC | 27 Jan 25 12:15 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:10:38
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:10:38.803047 1104534 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:10:38.803261 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:10:38.803287 1104534 out.go:358] Setting ErrFile to fd 2...
	I0127 12:10:38.803306 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:10:38.803676 1104534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 12:10:38.804192 1104534 out.go:352] Setting JSON to false
	I0127 12:10:38.805735 1104534 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17584,"bootTime":1737962255,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 12:10:38.805850 1104534 start.go:139] virtualization:  
	I0127 12:10:38.808874 1104534 out.go:177] * [no-preload-835765] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 12:10:38.812232 1104534 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:10:38.812359 1104534 notify.go:220] Checking for updates...
	I0127 12:10:38.817591 1104534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:10:38.820375 1104534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 12:10:38.822943 1104534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 12:10:38.825500 1104534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 12:10:38.828053 1104534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:10:38.831436 1104534 config.go:182] Loaded profile config "no-preload-835765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:10:38.832050 1104534 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:10:38.863133 1104534 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:10:38.863258 1104534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:10:38.937831 1104534 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 12:10:38.922663769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:10:38.937948 1104534 docker.go:318] overlay module found
	I0127 12:10:38.941535 1104534 out.go:177] * Using the docker driver based on existing profile
	I0127 12:10:38.944224 1104534 start.go:297] selected driver: docker
	I0127 12:10:38.944287 1104534 start.go:901] validating driver "docker" against &{Name:no-preload-835765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-835765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:10:38.944448 1104534 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:10:38.945332 1104534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:10:39.006329 1104534 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-01-27 12:10:38.996157859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:10:39.006779 1104534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:10:39.006806 1104534 cni.go:84] Creating CNI manager for ""
	I0127 12:10:39.006848 1104534 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 12:10:39.006897 1104534 start.go:340] cluster config:
	{Name:no-preload-835765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-835765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:10:39.025134 1104534 out.go:177] * Starting "no-preload-835765" primary control-plane node in "no-preload-835765" cluster
	I0127 12:10:39.027812 1104534 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 12:10:39.030533 1104534 out.go:177] * Pulling base image v0.0.46 ...
	I0127 12:10:39.033294 1104534 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 12:10:39.033258 1104534 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:10:39.033520 1104534 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/config.json ...
	I0127 12:10:39.033856 1104534 cache.go:107] acquiring lock: {Name:mk292ee9ed5b43a9c61611c969682fac3248b2f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.033941 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 12:10:39.033950 1104534 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.511µs
	I0127 12:10:39.033959 1104534 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 12:10:39.033970 1104534 cache.go:107] acquiring lock: {Name:mka6724bad1f9cf143ea552dbb28a2755f784c65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.034008 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 12:10:39.034014 1104534 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.1" took 45.341µs
	I0127 12:10:39.034020 1104534 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 12:10:39.034029 1104534 cache.go:107] acquiring lock: {Name:mk1b9eab94843a4f62d36fad35d75d33bd51a03e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.034061 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 12:10:39.034066 1104534 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.1" took 37.981µs
	I0127 12:10:39.034072 1104534 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 12:10:39.034081 1104534 cache.go:107] acquiring lock: {Name:mk44a6dcc398514c761e3b9818a69e8bfbed5335 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.034112 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 12:10:39.034117 1104534 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.1" took 37.627µs
	I0127 12:10:39.034123 1104534 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 12:10:39.034135 1104534 cache.go:107] acquiring lock: {Name:mkeba9e5dabb87864f54b2e7d8405ff976418755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.034160 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 12:10:39.034170 1104534 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.1" took 34.321µs
	I0127 12:10:39.034175 1104534 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 12:10:39.034186 1104534 cache.go:107] acquiring lock: {Name:mkde6de3bc5db18a92ec49ef53a0c1340f50c6cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.034211 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0127 12:10:39.034220 1104534 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 31.04µs
	I0127 12:10:39.034225 1104534 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0127 12:10:39.034234 1104534 cache.go:107] acquiring lock: {Name:mkdf68f1ad8fd9f55ae85b3a69ce9b122c0e1f65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.034259 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 12:10:39.034264 1104534 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0" took 31.162µs
	I0127 12:10:39.034269 1104534 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 12:10:39.034278 1104534 cache.go:107] acquiring lock: {Name:mkc31583c11dfe3a18725ed06b477e250589266f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.034302 1104534 cache.go:115] /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 12:10:39.034307 1104534 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 30.03µs
	I0127 12:10:39.034312 1104534 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 12:10:39.034317 1104534 cache.go:87] Successfully saved all images to host disk.
	I0127 12:10:39.056936 1104534 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0127 12:10:39.056962 1104534 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0127 12:10:39.056983 1104534 cache.go:227] Successfully downloaded all kic artifacts
	I0127 12:10:39.057006 1104534 start.go:360] acquireMachinesLock for no-preload-835765: {Name:mkc979a89666647f77f4b877447000ec99cc9e8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:10:39.057086 1104534 start.go:364] duration metric: took 59.436µs to acquireMachinesLock for "no-preload-835765"
	I0127 12:10:39.057111 1104534 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:10:39.057125 1104534 fix.go:54] fixHost starting: 
	I0127 12:10:39.057394 1104534 cli_runner.go:164] Run: docker container inspect no-preload-835765 --format={{.State.Status}}
	I0127 12:10:39.074673 1104534 fix.go:112] recreateIfNeeded on no-preload-835765: state=Stopped err=<nil>
	W0127 12:10:39.074703 1104534 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:10:39.077693 1104534 out.go:177] * Restarting existing docker container for "no-preload-835765" ...
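	fixHost above decides whether to restart the container by reading its state with `docker container inspect`. A rough Go equivalent of that probe; containerState is a made-up helper name, not a minikube function:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState shells out to the Docker CLI the same way the cli_runner
	// lines above do, returning e.g. "running" or "exited".
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", name).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("no-preload-835765")
		fmt.Println(state, err)
	}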
	I0127 12:10:39.240630 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:41.740112 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:39.081358 1104534 cli_runner.go:164] Run: docker start no-preload-835765
	I0127 12:10:39.412477 1104534 cli_runner.go:164] Run: docker container inspect no-preload-835765 --format={{.State.Status}}
	I0127 12:10:39.432039 1104534 kic.go:430] container "no-preload-835765" state is running.
	I0127 12:10:39.432409 1104534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-835765
	I0127 12:10:39.459975 1104534 profile.go:143] Saving config to /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/config.json ...
	I0127 12:10:39.460205 1104534 machine.go:93] provisionDockerMachine start ...
	I0127 12:10:39.460281 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:39.480477 1104534 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:39.480735 1104534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I0127 12:10:39.480745 1104534 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:10:39.481659 1104534 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0127 12:10:42.609324 1104534 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-835765
	
	I0127 12:10:42.609352 1104534 ubuntu.go:169] provisioning hostname "no-preload-835765"
	I0127 12:10:42.609414 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:42.628230 1104534 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:42.628478 1104534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I0127 12:10:42.628496 1104534 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-835765 && echo "no-preload-835765" | sudo tee /etc/hostname
	I0127 12:10:42.770642 1104534 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-835765
	
	I0127 12:10:42.770737 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:42.789875 1104534 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:42.790136 1104534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4132a0] 0x415ae0 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I0127 12:10:42.790159 1104534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-835765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-835765/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-835765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:10:42.913316 1104534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:10:42.913346 1104534 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20318-888339/.minikube CaCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20318-888339/.minikube}
	I0127 12:10:42.913370 1104534 ubuntu.go:177] setting up certificates
	I0127 12:10:42.913380 1104534 provision.go:84] configureAuth start
	I0127 12:10:42.913445 1104534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-835765
	I0127 12:10:42.933903 1104534 provision.go:143] copyHostCerts
	I0127 12:10:42.933972 1104534 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem, removing ...
	I0127 12:10:42.933981 1104534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem
	I0127 12:10:42.934062 1104534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/ca.pem (1082 bytes)
	I0127 12:10:42.934169 1104534 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem, removing ...
	I0127 12:10:42.934179 1104534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem
	I0127 12:10:42.934207 1104534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/cert.pem (1123 bytes)
	I0127 12:10:42.934276 1104534 exec_runner.go:144] found /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem, removing ...
	I0127 12:10:42.934286 1104534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem
	I0127 12:10:42.934309 1104534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20318-888339/.minikube/key.pem (1675 bytes)
	I0127 12:10:42.934379 1104534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem org=jenkins.no-preload-835765 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-835765]
	I0127 12:10:43.477185 1104534 provision.go:177] copyRemoteCerts
	I0127 12:10:43.477355 1104534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:10:43.477476 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:43.495765 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:43.586538 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 12:10:43.619605 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:10:43.646951 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 12:10:43.681462 1104534 provision.go:87] duration metric: took 768.06575ms to configureAuth
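	configureAuth regenerates a server certificate whose SANs match the san=[...] list logged above. A minimal crypto/x509 sketch of assembling such a SAN list; the template fields are illustrative and the CA signing step is deliberately omitted:

	package main

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Certificate template with the IP and DNS SANs seen in the log line above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-835765"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-835765"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		fmt.Println("SANs:", tmpl.DNSNames, tmpl.IPAddresses)
		// Signing with the CA key via x509.CreateCertificate is not shown here.
	}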
	I0127 12:10:43.681493 1104534 ubuntu.go:193] setting minikube options for container-runtime
	I0127 12:10:43.681698 1104534 config.go:182] Loaded profile config "no-preload-835765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:10:43.681719 1104534 machine.go:96] duration metric: took 4.2214979s to provisionDockerMachine
	I0127 12:10:43.681728 1104534 start.go:293] postStartSetup for "no-preload-835765" (driver="docker")
	I0127 12:10:43.681745 1104534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:10:43.681798 1104534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:10:43.681843 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:43.699040 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:43.791620 1104534 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:10:43.795348 1104534 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 12:10:43.795387 1104534 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 12:10:43.795398 1104534 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 12:10:43.795405 1104534 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0127 12:10:43.795416 1104534 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-888339/.minikube/addons for local assets ...
	I0127 12:10:43.795479 1104534 filesync.go:126] Scanning /home/jenkins/minikube-integration/20318-888339/.minikube/files for local assets ...
	I0127 12:10:43.795563 1104534 filesync.go:149] local asset: /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem -> 8937152.pem in /etc/ssl/certs
	I0127 12:10:43.795666 1104534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:10:43.805879 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem --> /etc/ssl/certs/8937152.pem (1708 bytes)
	I0127 12:10:43.832485 1104534 start.go:296] duration metric: took 150.734927ms for postStartSetup
	I0127 12:10:43.832563 1104534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:10:43.832611 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:43.850375 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:43.938544 1104534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 12:10:43.943488 1104534 fix.go:56] duration metric: took 4.886362521s for fixHost
	I0127 12:10:43.943514 1104534 start.go:83] releasing machines lock for "no-preload-835765", held for 4.886414918s
	I0127 12:10:43.943585 1104534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-835765
	I0127 12:10:43.961374 1104534 ssh_runner.go:195] Run: cat /version.json
	I0127 12:10:43.961453 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:43.961374 1104534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:10:43.961540 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:43.981835 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:43.989140 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:44.228184 1104534 ssh_runner.go:195] Run: systemctl --version
	I0127 12:10:44.232718 1104534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:10:44.237306 1104534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 12:10:44.256278 1104534 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 12:10:44.256385 1104534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:10:44.265277 1104534 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 12:10:44.265359 1104534 start.go:495] detecting cgroup driver to use...
	I0127 12:10:44.265399 1104534 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0127 12:10:44.265460 1104534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:10:44.279509 1104534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:10:44.296653 1104534 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:10:44.296774 1104534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:10:44.310240 1104534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:10:44.324914 1104534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:10:44.418938 1104534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:10:44.504491 1104534 docker.go:233] disabling docker service ...
	I0127 12:10:44.504571 1104534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:10:44.519281 1104534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:10:44.532313 1104534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:10:44.627377 1104534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:10:44.721340 1104534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:10:44.736126 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:10:44.756045 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:10:44.767553 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:10:44.777471 1104534 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:10:44.777596 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:10:44.790792 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:10:44.801343 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:10:44.812744 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:10:44.826111 1104534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:10:44.835823 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:10:44.846915 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:10:44.859280 1104534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:10:44.870391 1104534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:10:44.880582 1104534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:10:44.889420 1104534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:10:44.974974 1104534 ssh_runner.go:195] Run: sudo systemctl restart containerd
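	The sed commands above force SystemdCgroup = false in /etc/containerd/config.toml so containerd matches the cgroupfs driver detected on the host, then restart containerd. A Go sketch of the same substitution; setCgroupfs is a hypothetical helper, not minikube code:

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCgroupfs rewrites the containerd config the way the
	// `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
	// command above does, preserving the original indentation.
	func setCgroupfs(configToml string) string {
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		return re.ReplaceAllString(configToml, "${1}SystemdCgroup = false")
	}

	func main() {
		fmt.Print(setCgroupfs("    SystemdCgroup = true\n"))
	}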
	I0127 12:10:45.206192 1104534 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0127 12:10:45.206368 1104534 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0127 12:10:45.212092 1104534 start.go:563] Will wait 60s for crictl version
	I0127 12:10:45.212221 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:10:45.216766 1104534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:10:45.285758 1104534 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0127 12:10:45.285851 1104534 ssh_runner.go:195] Run: containerd --version
	I0127 12:10:45.312643 1104534 ssh_runner.go:195] Run: containerd --version
	I0127 12:10:45.349400 1104534 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
	I0127 12:10:45.352258 1104534 cli_runner.go:164] Run: docker network inspect no-preload-835765 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 12:10:45.373997 1104534 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0127 12:10:45.378566 1104534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:10:45.390499 1104534 kubeadm.go:883] updating cluster {Name:no-preload-835765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-835765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:10:45.390622 1104534 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 12:10:45.390673 1104534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:10:45.431522 1104534 containerd.go:627] all images are preloaded for containerd runtime.
	I0127 12:10:45.431547 1104534 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:10:45.431556 1104534 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.1 containerd true true} ...
	I0127 12:10:45.431662 1104534 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-835765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-835765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:10:45.431728 1104534 ssh_runner.go:195] Run: sudo crictl info
	I0127 12:10:45.470564 1104534 cni.go:84] Creating CNI manager for ""
	I0127 12:10:45.470638 1104534 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 12:10:45.470668 1104534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:10:45.470720 1104534 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-835765 NodeName:no-preload-835765 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:10:45.470894 1104534 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-835765"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:10:45.471004 1104534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:10:45.483523 1104534 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:10:45.483626 1104534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:10:45.493178 1104534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0127 12:10:45.511350 1104534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:10:45.530108 1104534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
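	The kubeadm.yaml.new written above bundles several YAML documents, including the KubeletConfiguration that pins cgroupDriver to cgroupfs (matching the containerd patch earlier). A small sketch of reading that field back, assuming the third-party gopkg.in/yaml.v3 package; the struct covers only the fields inspected here:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	// kubeletConfig holds just the fields this sketch checks from the
	// KubeletConfiguration document shown in the log above.
	type kubeletConfig struct {
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
	}

	func main() {
		// A multi-document kubeadm.yaml would first be split on "---".
		doc := []byte("kind: KubeletConfiguration\ncgroupDriver: cgroupfs\n")
		var kc kubeletConfig
		if err := yaml.Unmarshal(doc, &kc); err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s uses cgroup driver %q\n", kc.Kind, kc.CgroupDriver)
	}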
	I0127 12:10:45.550896 1104534 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0127 12:10:45.554600 1104534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:10:45.565730 1104534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:10:45.651598 1104534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:10:45.666456 1104534 certs.go:68] Setting up /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765 for IP: 192.168.85.2
	I0127 12:10:45.666480 1104534 certs.go:194] generating shared ca certs ...
	I0127 12:10:45.666495 1104534 certs.go:226] acquiring lock for ca certs: {Name:mke15f79704ae0e83f911aa0e3f9c4b862da9341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:10:45.666626 1104534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20318-888339/.minikube/ca.key
	I0127 12:10:45.666682 1104534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.key
	I0127 12:10:45.666694 1104534 certs.go:256] generating profile certs ...
	I0127 12:10:45.666787 1104534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.key
	I0127 12:10:45.666850 1104534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/apiserver.key.a1be0775
	I0127 12:10:45.666903 1104534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/proxy-client.key
	I0127 12:10:45.667026 1104534 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715.pem (1338 bytes)
	W0127 12:10:45.667058 1104534 certs.go:480] ignoring /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715_empty.pem, impossibly tiny 0 bytes
	I0127 12:10:45.667071 1104534 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:10:45.667118 1104534 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/ca.pem (1082 bytes)
	I0127 12:10:45.667153 1104534 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:10:45.667178 1104534 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/certs/key.pem (1675 bytes)
	I0127 12:10:45.667228 1104534 certs.go:484] found cert: /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem (1708 bytes)
	I0127 12:10:45.668430 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:10:45.699255 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 12:10:45.725238 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:10:45.753007 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:10:45.780111 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:10:45.819608 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:10:45.854794 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:10:45.902427 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 12:10:45.937357 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/ssl/certs/8937152.pem --> /usr/share/ca-certificates/8937152.pem (1708 bytes)
	I0127 12:10:45.964302 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:10:45.992805 1104534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20318-888339/.minikube/certs/893715.pem --> /usr/share/ca-certificates/893715.pem (1338 bytes)
	I0127 12:10:46.028102 1104534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:10:46.050120 1104534 ssh_runner.go:195] Run: openssl version
	I0127 12:10:46.057725 1104534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:10:46.068624 1104534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:10:46.072491 1104534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 11:23 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:10:46.072560 1104534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:10:46.080487 1104534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:10:46.097736 1104534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/893715.pem && ln -fs /usr/share/ca-certificates/893715.pem /etc/ssl/certs/893715.pem"
	I0127 12:10:46.107964 1104534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/893715.pem
	I0127 12:10:46.111851 1104534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 11:31 /usr/share/ca-certificates/893715.pem
	I0127 12:10:46.111920 1104534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/893715.pem
	I0127 12:10:46.119607 1104534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/893715.pem /etc/ssl/certs/51391683.0"
	I0127 12:10:46.128975 1104534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8937152.pem && ln -fs /usr/share/ca-certificates/8937152.pem /etc/ssl/certs/8937152.pem"
	I0127 12:10:46.139301 1104534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8937152.pem
	I0127 12:10:46.143015 1104534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 11:31 /usr/share/ca-certificates/8937152.pem
	I0127 12:10:46.143110 1104534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8937152.pem
	I0127 12:10:46.150353 1104534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8937152.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:10:46.159785 1104534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:10:46.163389 1104534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:10:46.170391 1104534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:10:46.178687 1104534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:10:46.186905 1104534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:10:46.194508 1104534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:10:46.201357 1104534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
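	Each `openssl x509 ... -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check expressed in Go, as an illustrative sketch (expiresWithin is a made-up name):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		ok, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}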
	I0127 12:10:46.208238 1104534 kubeadm.go:392] StartCluster: {Name:no-preload-835765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-835765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:10:46.208339 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0127 12:10:46.208410 1104534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:10:46.252505 1104534 cri.go:89] found id: "9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af"
	I0127 12:10:46.252530 1104534 cri.go:89] found id: "b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9"
	I0127 12:10:46.252538 1104534 cri.go:89] found id: "0f07c42b866dc5ced085b913bc0032794fa31172869fd8b902b40cdea9c9c1e0"
	I0127 12:10:46.252550 1104534 cri.go:89] found id: "b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7"
	I0127 12:10:46.252554 1104534 cri.go:89] found id: "e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487"
	I0127 12:10:46.252557 1104534 cri.go:89] found id: "d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6"
	I0127 12:10:46.252561 1104534 cri.go:89] found id: "1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e"
	I0127 12:10:46.252564 1104534 cri.go:89] found id: "e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38"
	I0127 12:10:46.252567 1104534 cri.go:89] found id: ""
	I0127 12:10:46.252622 1104534 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W0127 12:10:46.265812 1104534 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-01-27T12:10:46Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I0127 12:10:46.265892 1104534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:10:46.274846 1104534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:10:46.274868 1104534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:10:46.274945 1104534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:10:46.284259 1104534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:10:46.284870 1104534 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-835765" does not appear in /home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 12:10:46.285177 1104534 kubeconfig.go:62] /home/jenkins/minikube-integration/20318-888339/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-835765" cluster setting kubeconfig missing "no-preload-835765" context setting]
	I0127 12:10:46.286476 1104534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/kubeconfig: {Name:mk75ddd380b783b9f157e482ffdcc29dbd635876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:10:46.287931 1104534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:10:46.315257 1104534 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0127 12:10:46.315293 1104534 kubeadm.go:597] duration metric: took 40.420055ms to restartPrimaryControlPlane
	I0127 12:10:46.315304 1104534 kubeadm.go:394] duration metric: took 107.076173ms to StartCluster
	I0127 12:10:46.315319 1104534 settings.go:142] acquiring lock: {Name:mk8e4620a376eeb900823ad35149c0dd6d301c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:10:46.315382 1104534 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 12:10:46.316309 1104534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20318-888339/kubeconfig: {Name:mk75ddd380b783b9f157e482ffdcc29dbd635876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:10:46.316518 1104534 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0127 12:10:46.316849 1104534 config.go:182] Loaded profile config "no-preload-835765": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:10:46.316914 1104534 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:10:46.316991 1104534 addons.go:69] Setting storage-provisioner=true in profile "no-preload-835765"
	I0127 12:10:46.317005 1104534 addons.go:238] Setting addon storage-provisioner=true in "no-preload-835765"
	W0127 12:10:46.317189 1104534 addons.go:247] addon storage-provisioner should already be in state true
	I0127 12:10:46.317222 1104534 host.go:66] Checking if "no-preload-835765" exists ...
	I0127 12:10:46.317156 1104534 addons.go:69] Setting dashboard=true in profile "no-preload-835765"
	I0127 12:10:46.317413 1104534 addons.go:238] Setting addon dashboard=true in "no-preload-835765"
	W0127 12:10:46.317422 1104534 addons.go:247] addon dashboard should already be in state true
	I0127 12:10:46.317475 1104534 host.go:66] Checking if "no-preload-835765" exists ...
	I0127 12:10:46.317693 1104534 cli_runner.go:164] Run: docker container inspect no-preload-835765 --format={{.State.Status}}
	I0127 12:10:46.318083 1104534 cli_runner.go:164] Run: docker container inspect no-preload-835765 --format={{.State.Status}}
	I0127 12:10:46.317171 1104534 addons.go:69] Setting default-storageclass=true in profile "no-preload-835765"
	I0127 12:10:46.318385 1104534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-835765"
	I0127 12:10:46.319257 1104534 cli_runner.go:164] Run: docker container inspect no-preload-835765 --format={{.State.Status}}
	I0127 12:10:46.317179 1104534 addons.go:69] Setting metrics-server=true in profile "no-preload-835765"
	I0127 12:10:46.322096 1104534 addons.go:238] Setting addon metrics-server=true in "no-preload-835765"
	W0127 12:10:46.322124 1104534 addons.go:247] addon metrics-server should already be in state true
	I0127 12:10:46.322182 1104534 host.go:66] Checking if "no-preload-835765" exists ...
	I0127 12:10:46.322678 1104534 cli_runner.go:164] Run: docker container inspect no-preload-835765 --format={{.State.Status}}
	I0127 12:10:46.326200 1104534 out.go:177] * Verifying Kubernetes components...
	I0127 12:10:46.337564 1104534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:10:46.384223 1104534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:10:46.388099 1104534 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:10:46.388123 1104534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:10:46.388192 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:46.413657 1104534 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 12:10:46.417054 1104534 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 12:10:46.419619 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 12:10:46.419643 1104534 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 12:10:46.419713 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:46.422453 1104534 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 12:10:43.741550 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:45.741618 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:47.745656 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:46.423512 1104534 addons.go:238] Setting addon default-storageclass=true in "no-preload-835765"
	W0127 12:10:46.424719 1104534 addons.go:247] addon default-storageclass should already be in state true
	I0127 12:10:46.424752 1104534 host.go:66] Checking if "no-preload-835765" exists ...
	I0127 12:10:46.425445 1104534 cli_runner.go:164] Run: docker container inspect no-preload-835765 --format={{.State.Status}}
	I0127 12:10:46.426237 1104534 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:10:46.426258 1104534 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:10:46.426312 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:46.434763 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:46.486009 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:46.486248 1104534 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:10:46.486262 1104534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:10:46.486332 1104534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-835765
	I0127 12:10:46.488421 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:46.519103 1104534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/no-preload-835765/id_rsa Username:docker}
	I0127 12:10:46.544668 1104534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:10:46.592618 1104534 node_ready.go:35] waiting up to 6m0s for node "no-preload-835765" to be "Ready" ...
	I0127 12:10:46.716165 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 12:10:46.716186 1104534 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 12:10:46.753719 1104534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:10:46.813417 1104534 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:10:46.813485 1104534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 12:10:46.834049 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 12:10:46.834129 1104534 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 12:10:46.835362 1104534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:10:46.909466 1104534 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:10:46.909568 1104534 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:10:46.969073 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 12:10:46.969144 1104534 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 12:10:47.066287 1104534 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:10:47.066370 1104534 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:10:47.122712 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 12:10:47.122795 1104534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 12:10:47.206676 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 12:10:47.206701 1104534 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0127 12:10:47.287967 1104534 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0127 12:10:47.288055 1104534 retry.go:31] will retry after 323.375103ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0127 12:10:47.315759 1104534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0127 12:10:47.397621 1104534 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0127 12:10:47.397704 1104534 retry.go:31] will retry after 338.042784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
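	The first kubectl apply attempts fail with "connection refused" because the apiserver on localhost:8443 is still starting, so retry.go waits a few hundred milliseconds and tries again. A bare-bones sketch of that retry-with-jitter pattern; the delays and attempt count are illustrative, not the values minikube uses:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// applyWithRetry re-runs apply with a small jittered delay until it succeeds
	// or the attempts are exhausted, mirroring the "will retry after ..." lines.
	func applyWithRetry(apply func() error, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			delay := 300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := applyWithRetry(func() error {
			calls++
			if calls < 3 {
				return errors.New("connection refused") // apiserver not ready yet
			}
			return nil
		}, 5)
		fmt.Println(calls, err)
	}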
	I0127 12:10:47.408297 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 12:10:47.408371 1104534 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 12:10:47.513762 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 12:10:47.513788 1104534 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 12:10:47.591628 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 12:10:47.591654 1104534 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 12:10:47.611581 1104534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:10:47.714382 1104534 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:10:47.714419 1104534 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 12:10:47.736808 1104534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:10:47.874952 1104534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 12:10:50.240430 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:52.240688 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:51.803948 1104534 node_ready.go:49] node "no-preload-835765" has status "Ready":"True"
	I0127 12:10:51.803976 1104534 node_ready.go:38] duration metric: took 5.211318882s for node "no-preload-835765" to be "Ready" ...
	I0127 12:10:51.803986 1104534 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:10:51.883904 1104534 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8wfgw" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.924217 1104534 pod_ready.go:93] pod "coredns-668d6bf9bc-8wfgw" in "kube-system" namespace has status "Ready":"True"
	I0127 12:10:51.924245 1104534 pod_ready.go:82] duration metric: took 40.305515ms for pod "coredns-668d6bf9bc-8wfgw" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.924257 1104534 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.955276 1104534 pod_ready.go:93] pod "etcd-no-preload-835765" in "kube-system" namespace has status "Ready":"True"
	I0127 12:10:51.955302 1104534 pod_ready.go:82] duration metric: took 31.037455ms for pod "etcd-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.955320 1104534 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.969520 1104534 pod_ready.go:93] pod "kube-apiserver-no-preload-835765" in "kube-system" namespace has status "Ready":"True"
	I0127 12:10:51.969547 1104534 pod_ready.go:82] duration metric: took 14.219366ms for pod "kube-apiserver-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.969559 1104534 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.983523 1104534 pod_ready.go:93] pod "kube-controller-manager-no-preload-835765" in "kube-system" namespace has status "Ready":"True"
	I0127 12:10:51.983551 1104534 pod_ready.go:82] duration metric: took 13.982695ms for pod "kube-controller-manager-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:51.983564 1104534 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6j77q" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:52.015527 1104534 pod_ready.go:93] pod "kube-proxy-6j77q" in "kube-system" namespace has status "Ready":"True"
	I0127 12:10:52.015555 1104534 pod_ready.go:82] duration metric: took 31.98318ms for pod "kube-proxy-6j77q" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:52.015568 1104534 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:52.408574 1104534 pod_ready.go:93] pod "kube-scheduler-no-preload-835765" in "kube-system" namespace has status "Ready":"True"
	I0127 12:10:52.408601 1104534 pod_ready.go:82] duration metric: took 393.023075ms for pod "kube-scheduler-no-preload-835765" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:52.408615 1104534 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace to be "Ready" ...
	I0127 12:10:54.416302 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:55.325625 1104534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.009779225s)
	I0127 12:10:55.325661 1104534 addons.go:479] Verifying addon metrics-server=true in "no-preload-835765"
	I0127 12:10:55.363443 1104534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.751820329s)
	I0127 12:10:55.363502 1104534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.626650792s)
	I0127 12:10:55.363589 1104534 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.488605964s)
	I0127 12:10:55.366673 1104534 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-835765 addons enable metrics-server
	
	I0127 12:10:55.372353 1104534 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0127 12:10:54.741996 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:57.239927 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:55.374967 1104534 addons.go:514] duration metric: took 9.058048039s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0127 12:10:56.936967 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:59.240275 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:01.241298 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:10:59.414939 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:01.416389 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:03.426544 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:03.740357 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:06.239980 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:05.915405 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:07.915533 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:08.741190 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:10.741646 1099122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:11.741804 1099122 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"True"
	I0127 12:11:11.741830 1099122 pod_ready.go:82] duration metric: took 1m8.008379088s for pod "kube-controller-manager-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:11.741843 1099122 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nt2l9" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:11.747091 1099122 pod_ready.go:93] pod "kube-proxy-nt2l9" in "kube-system" namespace has status "Ready":"True"
	I0127 12:11:11.747117 1099122 pod_ready.go:82] duration metric: took 5.26649ms for pod "kube-proxy-nt2l9" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:11.747129 1099122 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:10.415644 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:12.914568 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:13.753603 1099122 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-999803" in "kube-system" namespace has status "Ready":"True"
	I0127 12:11:13.753625 1099122 pod_ready.go:82] duration metric: took 2.006488609s for pod "kube-scheduler-old-k8s-version-999803" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:13.753636 1099122 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace to be "Ready" ...
	I0127 12:11:15.760059 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:17.760104 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:15.414556 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:17.415296 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:20.259362 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:22.259503 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:19.915237 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:22.414659 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:24.265513 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:26.759839 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:24.414703 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:26.415033 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:29.259521 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:31.759761 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:28.915350 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:31.415056 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:33.415268 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:33.760790 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:36.259938 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:38.260402 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:35.415391 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:37.914181 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:40.260435 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:42.261372 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:39.914398 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:41.915143 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:44.759807 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:46.760139 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:44.416783 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:46.914380 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:49.259135 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:51.260408 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:48.915266 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:51.416394 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:53.761606 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:56.262295 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:53.914602 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:55.915025 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:58.414880 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:11:58.760329 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:00.760445 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:03.259143 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:00.416034 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:02.915057 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:05.260247 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:07.759197 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:05.414974 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:07.415077 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:09.760214 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:11.760909 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:09.914864 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:11.915239 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:14.259817 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:16.260506 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:14.414612 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:16.414845 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:18.766515 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:21.259066 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:23.259646 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:18.914824 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:21.414566 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:25.259764 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:27.259886 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:23.915105 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:26.414708 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:28.415044 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:29.758956 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:31.765446 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:30.914568 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:32.915072 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:34.259539 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:36.259690 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:38.259810 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:35.414995 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:37.917578 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:40.265756 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:42.759695 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:40.414897 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:42.914760 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:44.760001 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:47.264085 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:45.414031 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:47.414286 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:49.759794 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:51.760814 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:49.414937 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:51.415236 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:54.260573 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:56.261649 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:53.914997 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:55.915775 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:58.414565 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:12:58.760562 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:01.260514 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:00.414786 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:02.415180 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:03.760846 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:06.259692 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:08.259809 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:04.415454 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:06.415535 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:10.264009 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:12.760961 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:08.914755 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:11.414542 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:13.418303 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:15.259543 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:17.259672 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:15.914779 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:17.915250 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:19.266685 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:21.760466 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:20.414958 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:22.914725 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:23.760739 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:26.260232 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:24.915208 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:27.415392 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:28.759672 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:30.759737 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:32.760624 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:29.914226 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:31.920853 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:35.260200 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:37.760079 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:34.415357 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:36.415527 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:39.760771 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:42.261258 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:38.915074 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:40.915321 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:42.919024 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:44.760471 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:47.260741 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:45.414581 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:47.415198 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:49.760653 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:51.761150 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:49.914817 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:52.414019 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:54.260064 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:56.260168 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:54.414449 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:56.914336 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:58.759553 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:00.760997 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:03.259192 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:58.914511 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:00.914932 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:02.915016 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:05.260842 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:07.759902 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:04.916117 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:07.414717 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:09.760696 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:11.761676 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:09.914790 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:11.914897 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:14.259176 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:16.260538 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:13.915152 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:15.915220 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:17.916494 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:18.760162 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:20.761412 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:23.259212 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:20.417172 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:22.914389 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:25.259378 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:27.260207 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:25.414391 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:27.914990 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:29.761136 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:32.259719 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:30.415746 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:32.914428 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:34.260064 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:36.760335 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:34.924137 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:37.414702 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:38.760390 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:41.260552 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:43.260596 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:39.415266 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:41.914587 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:45.305357 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:47.761135 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:43.915452 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:46.415069 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:49.762322 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:52.262267 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:48.914614 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:51.415286 1104534 pod_ready.go:103] pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:52.409492 1104534 pod_ready.go:82] duration metric: took 4m0.000857266s for pod "metrics-server-f79f97bbb-kzh9d" in "kube-system" namespace to be "Ready" ...
	E0127 12:14:52.409551 1104534 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 12:14:52.409563 1104534 pod_ready.go:39] duration metric: took 4m0.605564558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:14:52.409581 1104534 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:14:52.409615 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:14:52.409692 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:14:52.468089 1104534 cri.go:89] found id: "1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c"
	I0127 12:14:52.468113 1104534 cri.go:89] found id: "1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e"
	I0127 12:14:52.468118 1104534 cri.go:89] found id: ""
	I0127 12:14:52.468126 1104534 logs.go:282] 2 containers: [1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c 1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e]
	I0127 12:14:52.468180 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.472041 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.475897 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 12:14:52.475966 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:14:52.521693 1104534 cri.go:89] found id: "483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f"
	I0127 12:14:52.521713 1104534 cri.go:89] found id: "e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38"
	I0127 12:14:52.521718 1104534 cri.go:89] found id: ""
	I0127 12:14:52.521725 1104534 logs.go:282] 2 containers: [483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38]
	I0127 12:14:52.521783 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.526901 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.530874 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 12:14:52.530943 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:14:52.581693 1104534 cri.go:89] found id: "509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25"
	I0127 12:14:52.581718 1104534 cri.go:89] found id: "9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af"
	I0127 12:14:52.581724 1104534 cri.go:89] found id: ""
	I0127 12:14:52.581732 1104534 logs.go:282] 2 containers: [509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25 9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af]
	I0127 12:14:52.581788 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.585405 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.588979 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:14:52.589105 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:14:52.628610 1104534 cri.go:89] found id: "9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863"
	I0127 12:14:52.628646 1104534 cri.go:89] found id: "d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6"
	I0127 12:14:52.628650 1104534 cri.go:89] found id: ""
	I0127 12:14:52.628657 1104534 logs.go:282] 2 containers: [9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863 d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6]
	I0127 12:14:52.628717 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.632373 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.635897 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:14:52.635973 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:14:52.674828 1104534 cri.go:89] found id: "cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388"
	I0127 12:14:52.674902 1104534 cri.go:89] found id: "b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7"
	I0127 12:14:52.674915 1104534 cri.go:89] found id: ""
	I0127 12:14:52.674924 1104534 logs.go:282] 2 containers: [cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388 b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7]
	I0127 12:14:52.674993 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.678483 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.681899 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:14:52.681984 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:14:52.722129 1104534 cri.go:89] found id: "32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191"
	I0127 12:14:52.722152 1104534 cri.go:89] found id: "e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487"
	I0127 12:14:52.722156 1104534 cri.go:89] found id: ""
	I0127 12:14:52.722164 1104534 logs.go:282] 2 containers: [32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191 e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487]
	I0127 12:14:52.722243 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.726009 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.729362 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 12:14:52.729431 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:14:52.781432 1104534 cri.go:89] found id: "fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6"
	I0127 12:14:52.781458 1104534 cri.go:89] found id: "b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9"
	I0127 12:14:52.781463 1104534 cri.go:89] found id: ""
	I0127 12:14:52.781471 1104534 logs.go:282] 2 containers: [fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6 b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9]
	I0127 12:14:52.781542 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.785249 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.790278 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:14:52.790398 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:14:52.837147 1104534 cri.go:89] found id: "cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5"
	I0127 12:14:52.837167 1104534 cri.go:89] found id: ""
	I0127 12:14:52.837175 1104534 logs.go:282] 1 containers: [cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5]
	I0127 12:14:52.837230 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.842098 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 12:14:52.842166 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 12:14:52.886268 1104534 cri.go:89] found id: "f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333"
	I0127 12:14:52.886298 1104534 cri.go:89] found id: "14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37"
	I0127 12:14:52.886304 1104534 cri.go:89] found id: ""
	I0127 12:14:52.886311 1104534 logs.go:282] 2 containers: [f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333 14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37]
	I0127 12:14:52.886381 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.890198 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:14:52.893680 1104534 logs.go:123] Gathering logs for kube-controller-manager [e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487] ...
	I0127 12:14:52.893701 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487"
	I0127 12:14:52.971483 1104534 logs.go:123] Gathering logs for kindnet [fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6] ...
	I0127 12:14:52.971516 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6"
	I0127 12:14:53.026229 1104534 logs.go:123] Gathering logs for storage-provisioner [f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333] ...
	I0127 12:14:53.026262 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333"
	I0127 12:14:53.074822 1104534 logs.go:123] Gathering logs for storage-provisioner [14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37] ...
	I0127 12:14:53.074854 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37"
	I0127 12:14:53.120261 1104534 logs.go:123] Gathering logs for container status ...
	I0127 12:14:53.120289 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:14:53.167689 1104534 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:14:53.167718 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:14:53.313357 1104534 logs.go:123] Gathering logs for coredns [509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25] ...
	I0127 12:14:53.313387 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25"
	I0127 12:14:53.355258 1104534 logs.go:123] Gathering logs for kube-proxy [b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7] ...
	I0127 12:14:53.355296 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7"
	I0127 12:14:53.398790 1104534 logs.go:123] Gathering logs for kube-controller-manager [32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191] ...
	I0127 12:14:53.398820 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191"
	I0127 12:14:53.467891 1104534 logs.go:123] Gathering logs for kindnet [b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9] ...
	I0127 12:14:53.467926 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9"
	I0127 12:14:53.516544 1104534 logs.go:123] Gathering logs for dmesg ...
	I0127 12:14:53.516576 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:14:53.535166 1104534 logs.go:123] Gathering logs for kube-apiserver [1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c] ...
	I0127 12:14:53.535196 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c"
	I0127 12:14:53.595276 1104534 logs.go:123] Gathering logs for kube-apiserver [1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e] ...
	I0127 12:14:53.595317 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e"
	I0127 12:14:53.647355 1104534 logs.go:123] Gathering logs for coredns [9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af] ...
	I0127 12:14:53.647390 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af"
	I0127 12:14:53.687311 1104534 logs.go:123] Gathering logs for kube-scheduler [9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863] ...
	I0127 12:14:53.687339 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863"
	I0127 12:14:53.724606 1104534 logs.go:123] Gathering logs for kube-scheduler [d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6] ...
	I0127 12:14:53.724634 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6"
	I0127 12:14:53.788367 1104534 logs.go:123] Gathering logs for kube-proxy [cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388] ...
	I0127 12:14:53.788405 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388"
	I0127 12:14:54.760292 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:57.260208 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:14:53.833338 1104534 logs.go:123] Gathering logs for kubernetes-dashboard [cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5] ...
	I0127 12:14:53.833416 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5"
	I0127 12:14:53.878744 1104534 logs.go:123] Gathering logs for kubelet ...
	I0127 12:14:53.878826 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:14:53.937189 1104534 logs.go:138] Found kubelet problem: Jan 27 12:10:57 no-preload-835765 kubelet[659]: I0127 12:10:57.568961     659 status_manager.go:890] "Failed to get status for pod" podUID="d66094c8-5c1a-4aaa-a14c-27954c4c5434" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-qwgv5" err="pods \"kubernetes-dashboard-7779f9b69b-qwgv5\" is forbidden: User \"system:node:no-preload-835765\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-835765' and this object"
	I0127 12:14:53.970519 1104534 logs.go:123] Gathering logs for etcd [483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f] ...
	I0127 12:14:53.970556 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f"
	I0127 12:14:54.019375 1104534 logs.go:123] Gathering logs for etcd [e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38] ...
	I0127 12:14:54.019407 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38"
	I0127 12:14:54.078194 1104534 logs.go:123] Gathering logs for containerd ...
	I0127 12:14:54.078224 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 12:14:54.151084 1104534 out.go:358] Setting ErrFile to fd 2...
	I0127 12:14:54.151122 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 12:14:54.151198 1104534 out.go:270] X Problems detected in kubelet:
	W0127 12:14:54.151209 1104534 out.go:270]   Jan 27 12:10:57 no-preload-835765 kubelet[659]: I0127 12:10:57.568961     659 status_manager.go:890] "Failed to get status for pod" podUID="d66094c8-5c1a-4aaa-a14c-27954c4c5434" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-qwgv5" err="pods \"kubernetes-dashboard-7779f9b69b-qwgv5\" is forbidden: User \"system:node:no-preload-835765\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-835765' and this object"
	I0127 12:14:54.151215 1104534 out.go:358] Setting ErrFile to fd 2...
	I0127 12:14:54.151230 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:14:59.760442 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:01.760706 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:04.260789 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:06.759923 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:04.153767 1104534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:15:04.165808 1104534 api_server.go:72] duration metric: took 4m17.84925442s to wait for apiserver process to appear ...
	I0127 12:15:04.165833 1104534 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:15:04.165867 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:15:04.165926 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:15:04.206360 1104534 cri.go:89] found id: "1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c"
	I0127 12:15:04.206379 1104534 cri.go:89] found id: "1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e"
	I0127 12:15:04.206384 1104534 cri.go:89] found id: ""
	I0127 12:15:04.206391 1104534 logs.go:282] 2 containers: [1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c 1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e]
	I0127 12:15:04.206446 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.210163 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.213563 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 12:15:04.213630 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:15:04.265694 1104534 cri.go:89] found id: "483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f"
	I0127 12:15:04.265716 1104534 cri.go:89] found id: "e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38"
	I0127 12:15:04.265721 1104534 cri.go:89] found id: ""
	I0127 12:15:04.265728 1104534 logs.go:282] 2 containers: [483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38]
	I0127 12:15:04.265796 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.269778 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.273247 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 12:15:04.273325 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:15:04.325226 1104534 cri.go:89] found id: "509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25"
	I0127 12:15:04.325246 1104534 cri.go:89] found id: "9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af"
	I0127 12:15:04.325250 1104534 cri.go:89] found id: ""
	I0127 12:15:04.325257 1104534 logs.go:282] 2 containers: [509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25 9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af]
	I0127 12:15:04.325313 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.329161 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.332948 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:15:04.333120 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:15:04.376122 1104534 cri.go:89] found id: "9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863"
	I0127 12:15:04.376144 1104534 cri.go:89] found id: "d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6"
	I0127 12:15:04.376148 1104534 cri.go:89] found id: ""
	I0127 12:15:04.376155 1104534 logs.go:282] 2 containers: [9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863 d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6]
	I0127 12:15:04.376214 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.379955 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.383516 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:15:04.383602 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:15:04.421817 1104534 cri.go:89] found id: "cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388"
	I0127 12:15:04.421892 1104534 cri.go:89] found id: "b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7"
	I0127 12:15:04.421904 1104534 cri.go:89] found id: ""
	I0127 12:15:04.421912 1104534 logs.go:282] 2 containers: [cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388 b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7]
	I0127 12:15:04.421983 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.427341 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.431155 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:15:04.431250 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:15:04.469681 1104534 cri.go:89] found id: "32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191"
	I0127 12:15:04.469703 1104534 cri.go:89] found id: "e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487"
	I0127 12:15:04.469708 1104534 cri.go:89] found id: ""
	I0127 12:15:04.469715 1104534 logs.go:282] 2 containers: [32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191 e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487]
	I0127 12:15:04.469818 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.473314 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.476616 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 12:15:04.476707 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:15:04.526341 1104534 cri.go:89] found id: "fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6"
	I0127 12:15:04.526361 1104534 cri.go:89] found id: "b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9"
	I0127 12:15:04.526366 1104534 cri.go:89] found id: ""
	I0127 12:15:04.526373 1104534 logs.go:282] 2 containers: [fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6 b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9]
	I0127 12:15:04.526429 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.530577 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.534415 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 12:15:04.534511 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 12:15:04.580996 1104534 cri.go:89] found id: "f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333"
	I0127 12:15:04.581023 1104534 cri.go:89] found id: "14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37"
	I0127 12:15:04.581066 1104534 cri.go:89] found id: ""
	I0127 12:15:04.581073 1104534 logs.go:282] 2 containers: [f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333 14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37]
	I0127 12:15:04.581130 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.584898 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.588862 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:15:04.588937 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:15:04.630840 1104534 cri.go:89] found id: "cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5"
	I0127 12:15:04.630862 1104534 cri.go:89] found id: ""
	I0127 12:15:04.630869 1104534 logs.go:282] 1 containers: [cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5]
	I0127 12:15:04.630943 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:04.634596 1104534 logs.go:123] Gathering logs for kube-apiserver [1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c] ...
	I0127 12:15:04.634620 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c"
	I0127 12:15:04.702788 1104534 logs.go:123] Gathering logs for coredns [509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25] ...
	I0127 12:15:04.702826 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25"
	I0127 12:15:04.750031 1104534 logs.go:123] Gathering logs for kindnet [fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6] ...
	I0127 12:15:04.750128 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6"
	I0127 12:15:04.805002 1104534 logs.go:123] Gathering logs for containerd ...
	I0127 12:15:04.805124 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 12:15:04.874471 1104534 logs.go:123] Gathering logs for storage-provisioner [f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333] ...
	I0127 12:15:04.874505 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333"
	I0127 12:15:04.915226 1104534 logs.go:123] Gathering logs for storage-provisioner [14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37] ...
	I0127 12:15:04.915253 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37"
	I0127 12:15:04.965916 1104534 logs.go:123] Gathering logs for kubernetes-dashboard [cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5] ...
	I0127 12:15:04.965943 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5"
	I0127 12:15:05.010093 1104534 logs.go:123] Gathering logs for kube-apiserver [1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e] ...
	I0127 12:15:05.010197 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e"
	I0127 12:15:05.064758 1104534 logs.go:123] Gathering logs for coredns [9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af] ...
	I0127 12:15:05.064797 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af"
	I0127 12:15:05.122869 1104534 logs.go:123] Gathering logs for kube-proxy [cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388] ...
	I0127 12:15:05.122939 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388"
	I0127 12:15:05.165959 1104534 logs.go:123] Gathering logs for kube-controller-manager [32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191] ...
	I0127 12:15:05.165995 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191"
	I0127 12:15:05.259097 1104534 logs.go:123] Gathering logs for dmesg ...
	I0127 12:15:05.259176 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:15:05.279066 1104534 logs.go:123] Gathering logs for kube-scheduler [9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863] ...
	I0127 12:15:05.279149 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863"
	I0127 12:15:05.327280 1104534 logs.go:123] Gathering logs for kube-proxy [b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7] ...
	I0127 12:15:05.327350 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7"
	I0127 12:15:05.377189 1104534 logs.go:123] Gathering logs for kindnet [b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9] ...
	I0127 12:15:05.377220 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9"
	I0127 12:15:05.440485 1104534 logs.go:123] Gathering logs for kube-scheduler [d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6] ...
	I0127 12:15:05.440514 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6"
	I0127 12:15:05.506648 1104534 logs.go:123] Gathering logs for kube-controller-manager [e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487] ...
	I0127 12:15:05.506681 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487"
	I0127 12:15:05.583948 1104534 logs.go:123] Gathering logs for container status ...
	I0127 12:15:05.583988 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:15:05.635807 1104534 logs.go:123] Gathering logs for kubelet ...
	I0127 12:15:05.635878 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:15:05.690586 1104534 logs.go:138] Found kubelet problem: Jan 27 12:10:57 no-preload-835765 kubelet[659]: I0127 12:10:57.568961     659 status_manager.go:890] "Failed to get status for pod" podUID="d66094c8-5c1a-4aaa-a14c-27954c4c5434" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-qwgv5" err="pods \"kubernetes-dashboard-7779f9b69b-qwgv5\" is forbidden: User \"system:node:no-preload-835765\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-835765' and this object"
	I0127 12:15:05.725729 1104534 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:15:05.725781 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:15:05.860466 1104534 logs.go:123] Gathering logs for etcd [483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f] ...
	I0127 12:15:05.860496 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f"
	I0127 12:15:05.931631 1104534 logs.go:123] Gathering logs for etcd [e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38] ...
	I0127 12:15:05.931676 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38"
	I0127 12:15:05.991931 1104534 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:05.991962 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 12:15:05.992019 1104534 out.go:270] X Problems detected in kubelet:
	W0127 12:15:05.992036 1104534 out.go:270]   Jan 27 12:10:57 no-preload-835765 kubelet[659]: I0127 12:10:57.568961     659 status_manager.go:890] "Failed to get status for pod" podUID="d66094c8-5c1a-4aaa-a14c-27954c4c5434" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-qwgv5" err="pods \"kubernetes-dashboard-7779f9b69b-qwgv5\" is forbidden: User \"system:node:no-preload-835765\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-835765' and this object"
	I0127 12:15:05.992042 1104534 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:05.992048 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:15:08.761236 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:10.761497 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:13.259288 1099122 pod_ready.go:103] pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace has status "Ready":"False"
	I0127 12:15:13.754136 1099122 pod_ready.go:82] duration metric: took 4m0.000481802s for pod "metrics-server-9975d5f86-qzhdd" in "kube-system" namespace to be "Ready" ...
	E0127 12:15:13.754168 1099122 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 12:15:13.754179 1099122 pod_ready.go:39] duration metric: took 5m20.463611828s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:15:13.754196 1099122 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:15:13.754230 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:15:13.754302 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:15:13.794220 1099122 cri.go:89] found id: "709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:13.794243 1099122 cri.go:89] found id: "f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:13.794248 1099122 cri.go:89] found id: ""
	I0127 12:15:13.794255 1099122 logs.go:282] 2 containers: [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e]
	I0127 12:15:13.794338 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.797981 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.801452 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 12:15:13.801523 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:15:13.839152 1099122 cri.go:89] found id: "8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:13.839179 1099122 cri.go:89] found id: "2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:13.839185 1099122 cri.go:89] found id: ""
	I0127 12:15:13.839192 1099122 logs.go:282] 2 containers: [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1]
	I0127 12:15:13.839249 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.842927 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.846323 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 12:15:13.846397 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:15:13.884753 1099122 cri.go:89] found id: "66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:13.884776 1099122 cri.go:89] found id: "006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:13.884781 1099122 cri.go:89] found id: ""
	I0127 12:15:13.884787 1099122 logs.go:282] 2 containers: [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863]
	I0127 12:15:13.884849 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.888585 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.892544 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:15:13.892620 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:15:13.935201 1099122 cri.go:89] found id: "15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:13.935265 1099122 cri.go:89] found id: "8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:13.935275 1099122 cri.go:89] found id: ""
	I0127 12:15:13.935282 1099122 logs.go:282] 2 containers: [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b]
	I0127 12:15:13.935348 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.938912 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.942321 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:15:13.942420 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:15:13.982013 1099122 cri.go:89] found id: "244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:13.982037 1099122 cri.go:89] found id: "69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:13.982042 1099122 cri.go:89] found id: ""
	I0127 12:15:13.982049 1099122 logs.go:282] 2 containers: [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d]
	I0127 12:15:13.982107 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.985808 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:13.989196 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:15:13.989297 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:15:14.034072 1099122 cri.go:89] found id: "35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:14.034096 1099122 cri.go:89] found id: "24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:14.034103 1099122 cri.go:89] found id: ""
	I0127 12:15:14.034110 1099122 logs.go:282] 2 containers: [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84]
	I0127 12:15:14.034175 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.038229 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.041981 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 12:15:14.042087 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:15:14.096637 1099122 cri.go:89] found id: "8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:14.096662 1099122 cri.go:89] found id: "92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:14.096667 1099122 cri.go:89] found id: ""
	I0127 12:15:14.096674 1099122 logs.go:282] 2 containers: [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f]
	I0127 12:15:14.096735 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.100700 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.104367 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:15:14.104440 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:15:14.142295 1099122 cri.go:89] found id: "d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:14.142318 1099122 cri.go:89] found id: ""
	I0127 12:15:14.142341 1099122 logs.go:282] 1 containers: [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6]
	I0127 12:15:14.142395 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.145773 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 12:15:14.145852 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 12:15:14.184211 1099122 cri.go:89] found id: "1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:14.184233 1099122 cri.go:89] found id: "072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:14.184239 1099122 cri.go:89] found id: ""
	I0127 12:15:14.184246 1099122 logs.go:282] 2 containers: [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4]
	I0127 12:15:14.184300 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.187804 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:14.191031 1099122 logs.go:123] Gathering logs for dmesg ...
	I0127 12:15:14.191103 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:15:14.210127 1099122 logs.go:123] Gathering logs for etcd [2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1] ...
	I0127 12:15:14.210159 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:14.250486 1099122 logs.go:123] Gathering logs for kube-controller-manager [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d] ...
	I0127 12:15:14.250517 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:14.310627 1099122 logs.go:123] Gathering logs for kubernetes-dashboard [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6] ...
	I0127 12:15:14.310663 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:14.357158 1099122 logs.go:123] Gathering logs for kubelet ...
	I0127 12:15:14.357187 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:15:14.421142 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.065326     663 reflector.go:138] object-"kube-system"/"metrics-server-token-827qh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-827qh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.421398 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.068245     663 reflector.go:138] object-"kube-system"/"kindnet-token-jxc27": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jxc27" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.421622 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.072508     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-pzcvk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-pzcvk" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.421857 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.092394     663 reflector.go:138] object-"kube-system"/"coredns-token-m2lsh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2lsh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422066 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.102861     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422281 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.106409     663 reflector.go:138] object-"default"/"default-token-pmbfm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pmbfm" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422509 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135445     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bbnlz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bbnlz" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.422714 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135737     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:14.431944 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:55 old-k8s-version-999803 kubelet[663]: E0127 12:09:55.695476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.432135 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:56 old-k8s-version-999803 kubelet[663]: E0127 12:09:56.587407     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.434973 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:08 old-k8s-version-999803 kubelet[663]: E0127 12:10:08.310539     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.436943 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:18 old-k8s-version-999803 kubelet[663]: E0127 12:10:18.685285     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.437410 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:19 old-k8s-version-999803 kubelet[663]: E0127 12:10:19.697364     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.437932 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:23 old-k8s-version-999803 kubelet[663]: E0127 12:10:23.289410     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.438369 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:27 old-k8s-version-999803 kubelet[663]: E0127 12:10:27.725508     663 pod_workers.go:191] Error syncing pod f73574be-9aec-4a33-ac88-97d900488a22 ("storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"
	W0127 12:15:14.439000 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:29 old-k8s-version-999803 kubelet[663]: E0127 12:10:29.743840     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.441866 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:35 old-k8s-version-999803 kubelet[663]: E0127 12:10:35.298365     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.442236 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:38 old-k8s-version-999803 kubelet[663]: E0127 12:10:38.666640     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.442554 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:46 old-k8s-version-999803 kubelet[663]: E0127 12:10:46.289557     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.443148 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:53 old-k8s-version-999803 kubelet[663]: E0127 12:10:53.813459     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.443334 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:57 old-k8s-version-999803 kubelet[663]: E0127 12:10:57.289244     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.443693 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:58 old-k8s-version-999803 kubelet[663]: E0127 12:10:58.662563     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.443879 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.289633     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.444208 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.290664     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.446683 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:21 old-k8s-version-999803 kubelet[663]: E0127 12:11:21.295133     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.447018 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:25 old-k8s-version-999803 kubelet[663]: E0127 12:11:25.289132     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.447202 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:36 old-k8s-version-999803 kubelet[663]: E0127 12:11:36.291476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.447862 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:40 old-k8s-version-999803 kubelet[663]: E0127 12:11:40.940519     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.448197 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:48 old-k8s-version-999803 kubelet[663]: E0127 12:11:48.662783     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.448380 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:50 old-k8s-version-999803 kubelet[663]: E0127 12:11:50.289726     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.448722 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.288851     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.448905 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.289979     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.449125 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:14 old-k8s-version-999803 kubelet[663]: E0127 12:12:14.295027     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.449483 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:18 old-k8s-version-999803 kubelet[663]: E0127 12:12:18.288843     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.449672 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:26 old-k8s-version-999803 kubelet[663]: E0127 12:12:26.289650     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.450001 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:30 old-k8s-version-999803 kubelet[663]: E0127 12:12:30.289329     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.450184 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:37 old-k8s-version-999803 kubelet[663]: E0127 12:12:37.289235     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.450509 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:44 old-k8s-version-999803 kubelet[663]: E0127 12:12:44.288931     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.452938 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:50 old-k8s-version-999803 kubelet[663]: E0127 12:12:50.299810     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:14.453274 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:57 old-k8s-version-999803 kubelet[663]: E0127 12:12:57.288813     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.453460 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:03 old-k8s-version-999803 kubelet[663]: E0127 12:13:03.289391     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.454055 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:11 old-k8s-version-999803 kubelet[663]: E0127 12:13:11.199586     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.454245 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:14 old-k8s-version-999803 kubelet[663]: E0127 12:13:14.289503     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.454570 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:18 old-k8s-version-999803 kubelet[663]: E0127 12:13:18.662572     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.454756 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:29 old-k8s-version-999803 kubelet[663]: E0127 12:13:29.289301     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.455083 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:31 old-k8s-version-999803 kubelet[663]: E0127 12:13:31.288795     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.455286 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.289460     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.455615 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.290963     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.455808 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:53 old-k8s-version-999803 kubelet[663]: E0127 12:13:53.289152     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.456133 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:57 old-k8s-version-999803 kubelet[663]: E0127 12:13:57.288829     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.456320 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:07 old-k8s-version-999803 kubelet[663]: E0127 12:14:07.289177     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.456652 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:12 old-k8s-version-999803 kubelet[663]: E0127 12:14:12.289332     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.456835 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:21 old-k8s-version-999803 kubelet[663]: E0127 12:14:21.289169     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.457166 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:26 old-k8s-version-999803 kubelet[663]: E0127 12:14:26.288817     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.457351 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:32 old-k8s-version-999803 kubelet[663]: E0127 12:14:32.289382     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.457676 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:37 old-k8s-version-999803 kubelet[663]: E0127 12:14:37.288718     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.457860 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.458184 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.458369 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:14.458694 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:14.458877 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 12:15:14.458888 1099122 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:15:14.458906 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:15:14.600554 1099122 logs.go:123] Gathering logs for coredns [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab] ...
	I0127 12:15:14.600589 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:14.642894 1099122 logs.go:123] Gathering logs for kube-controller-manager [24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84] ...
	I0127 12:15:14.642924 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:14.708414 1099122 logs.go:123] Gathering logs for etcd [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9] ...
	I0127 12:15:14.708446 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:14.750297 1099122 logs.go:123] Gathering logs for coredns [006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863] ...
	I0127 12:15:14.750327 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:14.787004 1099122 logs.go:123] Gathering logs for kube-scheduler [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47] ...
	I0127 12:15:14.787031 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:14.828360 1099122 logs.go:123] Gathering logs for kube-proxy [69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d] ...
	I0127 12:15:14.828389 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:14.873162 1099122 logs.go:123] Gathering logs for kindnet [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026] ...
	I0127 12:15:14.873189 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:14.921463 1099122 logs.go:123] Gathering logs for kindnet [92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f] ...
	I0127 12:15:14.921495 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:14.974595 1099122 logs.go:123] Gathering logs for storage-provisioner [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609] ...
	I0127 12:15:14.974623 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:15.034157 1099122 logs.go:123] Gathering logs for storage-provisioner [072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4] ...
	I0127 12:15:15.034189 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:15.077801 1099122 logs.go:123] Gathering logs for kube-apiserver [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f] ...
	I0127 12:15:15.077829 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:15.150398 1099122 logs.go:123] Gathering logs for kube-apiserver [f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e] ...
	I0127 12:15:15.150475 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:15.219967 1099122 logs.go:123] Gathering logs for kube-scheduler [8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b] ...
	I0127 12:15:15.220021 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:15.267003 1099122 logs.go:123] Gathering logs for kube-proxy [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56] ...
	I0127 12:15:15.267075 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:15.305261 1099122 logs.go:123] Gathering logs for containerd ...
	I0127 12:15:15.305350 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 12:15:15.379123 1099122 logs.go:123] Gathering logs for container status ...
	I0127 12:15:15.379163 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:15:15.437988 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:15.438018 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 12:15:15.438072 1099122 out.go:270] X Problems detected in kubelet:
	W0127 12:15:15.438088 1099122 out.go:270]   Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:15.438101 1099122 out.go:270]   Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:15.438126 1099122 out.go:270]   Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:15.438132 1099122 out.go:270]   Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:15.438137 1099122 out.go:270]   Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 12:15:15.438142 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:15.438151 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:15:15.993224 1104534 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0127 12:15:16.001545 1104534 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0127 12:15:16.002581 1104534 api_server.go:141] control plane version: v1.32.1
	I0127 12:15:16.002607 1104534 api_server.go:131] duration metric: took 11.836766659s to wait for apiserver health ...
	I0127 12:15:16.002615 1104534 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:15:16.002637 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:15:16.002699 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:15:16.041659 1104534 cri.go:89] found id: "1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c"
	I0127 12:15:16.041685 1104534 cri.go:89] found id: "1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e"
	I0127 12:15:16.041691 1104534 cri.go:89] found id: ""
	I0127 12:15:16.041698 1104534 logs.go:282] 2 containers: [1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c 1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e]
	I0127 12:15:16.041761 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.045565 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.049192 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 12:15:16.049269 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:15:16.103251 1104534 cri.go:89] found id: "483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f"
	I0127 12:15:16.103275 1104534 cri.go:89] found id: "e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38"
	I0127 12:15:16.103285 1104534 cri.go:89] found id: ""
	I0127 12:15:16.103293 1104534 logs.go:282] 2 containers: [483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38]
	I0127 12:15:16.103358 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.107119 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.110762 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 12:15:16.110835 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:15:16.150856 1104534 cri.go:89] found id: "509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25"
	I0127 12:15:16.150878 1104534 cri.go:89] found id: "9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af"
	I0127 12:15:16.150882 1104534 cri.go:89] found id: ""
	I0127 12:15:16.150889 1104534 logs.go:282] 2 containers: [509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25 9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af]
	I0127 12:15:16.150947 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.155002 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.158414 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:15:16.158489 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:15:16.194786 1104534 cri.go:89] found id: "9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863"
	I0127 12:15:16.194807 1104534 cri.go:89] found id: "d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6"
	I0127 12:15:16.194812 1104534 cri.go:89] found id: ""
	I0127 12:15:16.194819 1104534 logs.go:282] 2 containers: [9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863 d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6]
	I0127 12:15:16.194902 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.198584 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.202276 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:15:16.202371 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:15:16.249547 1104534 cri.go:89] found id: "cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388"
	I0127 12:15:16.249615 1104534 cri.go:89] found id: "b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7"
	I0127 12:15:16.249633 1104534 cri.go:89] found id: ""
	I0127 12:15:16.249655 1104534 logs.go:282] 2 containers: [cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388 b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7]
	I0127 12:15:16.249747 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.253544 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.257480 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:15:16.257554 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:15:16.313355 1104534 cri.go:89] found id: "32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191"
	I0127 12:15:16.313428 1104534 cri.go:89] found id: "e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487"
	I0127 12:15:16.313447 1104534 cri.go:89] found id: ""
	I0127 12:15:16.313468 1104534 logs.go:282] 2 containers: [32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191 e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487]
	I0127 12:15:16.313569 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.317728 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.321603 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 12:15:16.321703 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:15:16.361633 1104534 cri.go:89] found id: "fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6"
	I0127 12:15:16.361711 1104534 cri.go:89] found id: "b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9"
	I0127 12:15:16.361723 1104534 cri.go:89] found id: ""
	I0127 12:15:16.361732 1104534 logs.go:282] 2 containers: [fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6 b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9]
	I0127 12:15:16.361791 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.365379 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.368518 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 12:15:16.368599 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 12:15:16.407080 1104534 cri.go:89] found id: "f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333"
	I0127 12:15:16.407103 1104534 cri.go:89] found id: "14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37"
	I0127 12:15:16.407108 1104534 cri.go:89] found id: ""
	I0127 12:15:16.407115 1104534 logs.go:282] 2 containers: [f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333 14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37]
	I0127 12:15:16.407175 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.410860 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.414418 1104534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:15:16.414513 1104534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:15:16.474028 1104534 cri.go:89] found id: "cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5"
	I0127 12:15:16.474100 1104534 cri.go:89] found id: ""
	I0127 12:15:16.474115 1104534 logs.go:282] 1 containers: [cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5]
	I0127 12:15:16.474178 1104534 ssh_runner.go:195] Run: which crictl
	I0127 12:15:16.478146 1104534 logs.go:123] Gathering logs for kube-proxy [b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7] ...
	I0127 12:15:16.478174 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b4e9052c1fedabc674c2222dc1c4660a4446f3e68905594dd3752b87bfec8ab7"
	I0127 12:15:16.519452 1104534 logs.go:123] Gathering logs for kindnet [b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9] ...
	I0127 12:15:16.519482 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b01817eabb909bdf2fca7c99282d81be3e149b9e5e514379ae75fbbe909180a9"
	I0127 12:15:16.590945 1104534 logs.go:123] Gathering logs for containerd ...
	I0127 12:15:16.590973 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 12:15:16.664839 1104534 logs.go:123] Gathering logs for kube-apiserver [1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c] ...
	I0127 12:15:16.664875 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2e3b1476e69079c1ee9e25f9bb356cc8e52fbb728ebaf0da89cee0bdfbf91c"
	I0127 12:15:16.732116 1104534 logs.go:123] Gathering logs for etcd [483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f] ...
	I0127 12:15:16.732151 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 483e568c5ee91d6fd7c9f00d6dc9c8fb56f0a60f9a677db9f74189916de6377f"
	I0127 12:15:16.792124 1104534 logs.go:123] Gathering logs for kube-scheduler [d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6] ...
	I0127 12:15:16.792156 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3ca08813f7f28675608fdff26b822fa26ae938aa3184f12ed60a38df6a8e9b6"
	I0127 12:15:16.857786 1104534 logs.go:123] Gathering logs for kube-proxy [cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388] ...
	I0127 12:15:16.857816 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb5eb19b75d1d15f926d57747591a2cfaa9e12237e4d321718f0a161969c4388"
	I0127 12:15:16.904178 1104534 logs.go:123] Gathering logs for kube-controller-manager [32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191] ...
	I0127 12:15:16.904205 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32aa83148612a5a0d3262a4a0c01e05aafb6193b39935ef9b32e34b3731c4191"
	I0127 12:15:16.992684 1104534 logs.go:123] Gathering logs for kubelet ...
	I0127 12:15:16.992719 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:15:17.052148 1104534 logs.go:138] Found kubelet problem: Jan 27 12:10:57 no-preload-835765 kubelet[659]: I0127 12:10:57.568961     659 status_manager.go:890] "Failed to get status for pod" podUID="d66094c8-5c1a-4aaa-a14c-27954c4c5434" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-qwgv5" err="pods \"kubernetes-dashboard-7779f9b69b-qwgv5\" is forbidden: User \"system:node:no-preload-835765\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-835765' and this object"
	I0127 12:15:17.088519 1104534 logs.go:123] Gathering logs for dmesg ...
	I0127 12:15:17.088558 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:15:17.105210 1104534 logs.go:123] Gathering logs for coredns [9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af] ...
	I0127 12:15:17.105243 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cd164c0fe739bf31cb613cada307e3ec8b9a582e74d7893fce085f9c45ce8af"
	I0127 12:15:17.144493 1104534 logs.go:123] Gathering logs for storage-provisioner [f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333] ...
	I0127 12:15:17.144522 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6f404f21b109084490d079cfcc8b0371e82a1d33476cb90d3da0ee6ed14f333"
	I0127 12:15:17.181902 1104534 logs.go:123] Gathering logs for storage-provisioner [14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37] ...
	I0127 12:15:17.181928 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14d55d7c85202cf7f3b7019c6c77fd30c98972e2a8dc201b5e49061c93dfed37"
	I0127 12:15:17.227045 1104534 logs.go:123] Gathering logs for kube-controller-manager [e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487] ...
	I0127 12:15:17.227078 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e451ed26c1c38fb8a0a7cc5ea845d7583691c679eea5deb9dbc55465102cf487"
	I0127 12:15:17.284499 1104534 logs.go:123] Gathering logs for kindnet [fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6] ...
	I0127 12:15:17.284529 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7a077525a0fd6934b044f8aa8e1a14646463a0d4a2018526c7fc86f3a518e6"
	I0127 12:15:17.329309 1104534 logs.go:123] Gathering logs for kubernetes-dashboard [cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5] ...
	I0127 12:15:17.329337 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbf26bb26b863bdc58d25df8d595193c8e7929be38fe3a0662daa76fe04144d5"
	I0127 12:15:17.380574 1104534 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:15:17.380647 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:15:17.497130 1104534 logs.go:123] Gathering logs for kube-apiserver [1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e] ...
	I0127 12:15:17.497159 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f35da231fa433c1460f1cd99f1f2b6dd0431b163618dc1e87a2cf3a793ebd2e"
	I0127 12:15:17.560606 1104534 logs.go:123] Gathering logs for etcd [e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38] ...
	I0127 12:15:17.560879 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b517158b9240c94112d076fb03688bf738388758d1146c807b822e1d06fc38"
	I0127 12:15:17.611796 1104534 logs.go:123] Gathering logs for coredns [509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25] ...
	I0127 12:15:17.611825 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509d340d4dacfde08b300a919f3787c9c59261bf94f6ff99cbdfe93341133b25"
	I0127 12:15:17.657382 1104534 logs.go:123] Gathering logs for kube-scheduler [9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863] ...
	I0127 12:15:17.657409 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a31e4cb737d1c87d5361ee380073dcb21626c4ac496edc8a2635f77f41c4863"
	I0127 12:15:17.694504 1104534 logs.go:123] Gathering logs for container status ...
	I0127 12:15:17.694531 1104534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:15:17.738587 1104534 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:17.738669 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 12:15:17.738754 1104534 out.go:270] X Problems detected in kubelet:
	W0127 12:15:17.738768 1104534 out.go:270]   Jan 27 12:10:57 no-preload-835765 kubelet[659]: I0127 12:10:57.568961     659 status_manager.go:890] "Failed to get status for pod" podUID="d66094c8-5c1a-4aaa-a14c-27954c4c5434" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-qwgv5" err="pods \"kubernetes-dashboard-7779f9b69b-qwgv5\" is forbidden: User \"system:node:no-preload-835765\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-835765' and this object"
	I0127 12:15:17.738794 1104534 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:17.738803 1104534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:15:27.747342 1104534 system_pods.go:59] 9 kube-system pods found
	I0127 12:15:27.747465 1104534 system_pods.go:61] "coredns-668d6bf9bc-8wfgw" [42249467-e564-459a-ac3b-7c4a7fbf09ed] Running
	I0127 12:15:27.747488 1104534 system_pods.go:61] "etcd-no-preload-835765" [475893ea-62c8-4098-b525-7ad0f8c530cb] Running
	I0127 12:15:27.747532 1104534 system_pods.go:61] "kindnet-tmbh5" [ffbfdceb-9d13-493a-9644-7f1606fcd89e] Running
	I0127 12:15:27.747550 1104534 system_pods.go:61] "kube-apiserver-no-preload-835765" [182ab7ba-7d94-4590-89d4-b0ca3764d7c6] Running
	I0127 12:15:27.747570 1104534 system_pods.go:61] "kube-controller-manager-no-preload-835765" [3b430709-a5f7-498f-9d2f-f7ad5f00052f] Running
	I0127 12:15:27.747608 1104534 system_pods.go:61] "kube-proxy-6j77q" [0bd6d8c3-2a9d-4367-8763-62b461359b69] Running
	I0127 12:15:27.747631 1104534 system_pods.go:61] "kube-scheduler-no-preload-835765" [f608cbd8-acbb-4a70-9aa0-d252403e88c2] Running
	I0127 12:15:27.747653 1104534 system_pods.go:61] "metrics-server-f79f97bbb-kzh9d" [1a8a6212-6eaa-4ec1-9b1e-78d01a400f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:15:27.747689 1104534 system_pods.go:61] "storage-provisioner" [6fc8bfd2-49a1-4a6b-a8fe-32a472ce1753] Running
	I0127 12:15:27.747738 1104534 system_pods.go:74] duration metric: took 11.745114633s to wait for pod list to return data ...
	I0127 12:15:27.747771 1104534 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:15:27.750820 1104534 default_sa.go:45] found service account: "default"
	I0127 12:15:27.750847 1104534 default_sa.go:55] duration metric: took 3.053497ms for default service account to be created ...
	I0127 12:15:27.750856 1104534 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:15:27.756921 1104534 system_pods.go:87] 9 kube-system pods found
	I0127 12:15:27.759968 1104534 system_pods.go:105] "coredns-668d6bf9bc-8wfgw" [42249467-e564-459a-ac3b-7c4a7fbf09ed] Running
	I0127 12:15:27.759993 1104534 system_pods.go:105] "etcd-no-preload-835765" [475893ea-62c8-4098-b525-7ad0f8c530cb] Running
	I0127 12:15:27.760000 1104534 system_pods.go:105] "kindnet-tmbh5" [ffbfdceb-9d13-493a-9644-7f1606fcd89e] Running
	I0127 12:15:27.760005 1104534 system_pods.go:105] "kube-apiserver-no-preload-835765" [182ab7ba-7d94-4590-89d4-b0ca3764d7c6] Running
	I0127 12:15:27.760013 1104534 system_pods.go:105] "kube-controller-manager-no-preload-835765" [3b430709-a5f7-498f-9d2f-f7ad5f00052f] Running
	I0127 12:15:27.760018 1104534 system_pods.go:105] "kube-proxy-6j77q" [0bd6d8c3-2a9d-4367-8763-62b461359b69] Running
	I0127 12:15:27.760022 1104534 system_pods.go:105] "kube-scheduler-no-preload-835765" [f608cbd8-acbb-4a70-9aa0-d252403e88c2] Running
	I0127 12:15:27.760031 1104534 system_pods.go:105] "metrics-server-f79f97bbb-kzh9d" [1a8a6212-6eaa-4ec1-9b1e-78d01a400f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:15:27.760040 1104534 system_pods.go:105] "storage-provisioner" [6fc8bfd2-49a1-4a6b-a8fe-32a472ce1753] Running
	I0127 12:15:27.760049 1104534 system_pods.go:147] duration metric: took 9.186131ms to wait for k8s-apps to be running ...
	I0127 12:15:27.760059 1104534 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:15:27.760117 1104534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:15:27.771714 1104534 system_svc.go:56] duration metric: took 11.643712ms WaitForService to wait for kubelet
	I0127 12:15:27.771741 1104534 kubeadm.go:582] duration metric: took 4m41.455192192s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:15:27.771762 1104534 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:15:27.775024 1104534 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0127 12:15:27.775061 1104534 node_conditions.go:123] node cpu capacity is 2
	I0127 12:15:27.775073 1104534 node_conditions.go:105] duration metric: took 3.305528ms to run NodePressure ...
	I0127 12:15:27.775086 1104534 start.go:241] waiting for startup goroutines ...
	I0127 12:15:27.775095 1104534 start.go:246] waiting for cluster config update ...
	I0127 12:15:27.775106 1104534 start.go:255] writing updated cluster config ...
	I0127 12:15:27.775396 1104534 ssh_runner.go:195] Run: rm -f paused
	I0127 12:15:27.836313 1104534 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:15:27.839415 1104534 out.go:177] * Done! kubectl is now configured to use "no-preload-835765" cluster and "default" namespace by default
	I0127 12:15:25.440470 1099122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:15:25.452605 1099122 api_server.go:72] duration metric: took 5m53.082233775s to wait for apiserver process to appear ...
	I0127 12:15:25.452629 1099122 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:15:25.452667 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0127 12:15:25.452724 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 12:15:25.494090 1099122 cri.go:89] found id: "709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:25.494115 1099122 cri.go:89] found id: "f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:25.494121 1099122 cri.go:89] found id: ""
	I0127 12:15:25.494128 1099122 logs.go:282] 2 containers: [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e]
	I0127 12:15:25.494189 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.497645 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.500895 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0127 12:15:25.500968 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 12:15:25.542357 1099122 cri.go:89] found id: "8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:25.542442 1099122 cri.go:89] found id: "2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:25.542451 1099122 cri.go:89] found id: ""
	I0127 12:15:25.542460 1099122 logs.go:282] 2 containers: [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1]
	I0127 12:15:25.542525 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.548254 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.552119 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0127 12:15:25.552193 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 12:15:25.627449 1099122 cri.go:89] found id: "66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:25.627471 1099122 cri.go:89] found id: "006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:25.627476 1099122 cri.go:89] found id: ""
	I0127 12:15:25.627484 1099122 logs.go:282] 2 containers: [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863]
	I0127 12:15:25.627539 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.631955 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.635615 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0127 12:15:25.635695 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 12:15:25.686029 1099122 cri.go:89] found id: "15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:25.686052 1099122 cri.go:89] found id: "8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:25.686057 1099122 cri.go:89] found id: ""
	I0127 12:15:25.686063 1099122 logs.go:282] 2 containers: [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b]
	I0127 12:15:25.686121 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.691005 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.696361 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0127 12:15:25.696439 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 12:15:25.735204 1099122 cri.go:89] found id: "244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:25.735228 1099122 cri.go:89] found id: "69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:25.735233 1099122 cri.go:89] found id: ""
	I0127 12:15:25.735246 1099122 logs.go:282] 2 containers: [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d]
	I0127 12:15:25.735318 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.738739 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.742012 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 12:15:25.742080 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 12:15:25.783705 1099122 cri.go:89] found id: "35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:25.783728 1099122 cri.go:89] found id: "24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:25.783733 1099122 cri.go:89] found id: ""
	I0127 12:15:25.783740 1099122 logs.go:282] 2 containers: [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84]
	I0127 12:15:25.783798 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.787402 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.790806 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0127 12:15:25.790883 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 12:15:25.830016 1099122 cri.go:89] found id: "8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:25.830038 1099122 cri.go:89] found id: "92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:25.830043 1099122 cri.go:89] found id: ""
	I0127 12:15:25.830050 1099122 logs.go:282] 2 containers: [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f]
	I0127 12:15:25.830108 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.834070 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.837688 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0127 12:15:25.837767 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0127 12:15:25.880360 1099122 cri.go:89] found id: "1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:25.880381 1099122 cri.go:89] found id: "072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:25.880386 1099122 cri.go:89] found id: ""
	I0127 12:15:25.880394 1099122 logs.go:282] 2 containers: [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4]
	I0127 12:15:25.880459 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.884176 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.888084 1099122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 12:15:25.888159 1099122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 12:15:25.929160 1099122 cri.go:89] found id: "d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:25.929185 1099122 cri.go:89] found id: ""
	I0127 12:15:25.929193 1099122 logs.go:282] 1 containers: [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6]
	I0127 12:15:25.929257 1099122 ssh_runner.go:195] Run: which crictl
	I0127 12:15:25.934269 1099122 logs.go:123] Gathering logs for kindnet [92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f] ...
	I0127 12:15:25.934297 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f"
	I0127 12:15:25.978515 1099122 logs.go:123] Gathering logs for kube-scheduler [8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b] ...
	I0127 12:15:25.978546 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b"
	I0127 12:15:26.023365 1099122 logs.go:123] Gathering logs for kube-proxy [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56] ...
	I0127 12:15:26.023397 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56"
	I0127 12:15:26.074845 1099122 logs.go:123] Gathering logs for kube-proxy [69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d] ...
	I0127 12:15:26.074873 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d"
	I0127 12:15:26.126098 1099122 logs.go:123] Gathering logs for container status ...
	I0127 12:15:26.126177 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:15:26.170530 1099122 logs.go:123] Gathering logs for dmesg ...
	I0127 12:15:26.170571 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:15:26.190451 1099122 logs.go:123] Gathering logs for coredns [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab] ...
	I0127 12:15:26.190479 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab"
	I0127 12:15:26.240411 1099122 logs.go:123] Gathering logs for storage-provisioner [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609] ...
	I0127 12:15:26.240439 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609"
	I0127 12:15:26.285457 1099122 logs.go:123] Gathering logs for kindnet [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026] ...
	I0127 12:15:26.285491 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026"
	I0127 12:15:26.332919 1099122 logs.go:123] Gathering logs for containerd ...
	I0127 12:15:26.332949 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0127 12:15:26.397666 1099122 logs.go:123] Gathering logs for kube-apiserver [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f] ...
	I0127 12:15:26.397707 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f"
	I0127 12:15:26.459604 1099122 logs.go:123] Gathering logs for etcd [2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1] ...
	I0127 12:15:26.459638 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1"
	I0127 12:15:26.511966 1099122 logs.go:123] Gathering logs for kube-scheduler [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47] ...
	I0127 12:15:26.512123 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47"
	I0127 12:15:26.553498 1099122 logs.go:123] Gathering logs for etcd [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9] ...
	I0127 12:15:26.553568 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9"
	I0127 12:15:26.619235 1099122 logs.go:123] Gathering logs for coredns [006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863] ...
	I0127 12:15:26.619267 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863"
	I0127 12:15:26.662492 1099122 logs.go:123] Gathering logs for kube-controller-manager [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d] ...
	I0127 12:15:26.662523 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d"
	I0127 12:15:26.717270 1099122 logs.go:123] Gathering logs for kube-controller-manager [24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84] ...
	I0127 12:15:26.717303 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84"
	I0127 12:15:26.789783 1099122 logs.go:123] Gathering logs for storage-provisioner [072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4] ...
	I0127 12:15:26.789827 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4"
	I0127 12:15:26.841274 1099122 logs.go:123] Gathering logs for kubelet ...
	I0127 12:15:26.841302 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0127 12:15:26.899918 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.065326     663 reflector.go:138] object-"kube-system"/"metrics-server-token-827qh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-827qh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900177 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.068245     663 reflector.go:138] object-"kube-system"/"kindnet-token-jxc27": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jxc27" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900398 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.072508     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-pzcvk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-pzcvk" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900685 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.092394     663 reflector.go:138] object-"kube-system"/"coredns-token-m2lsh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2lsh" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.900893 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.102861     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.901112 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.106409     663 reflector.go:138] object-"default"/"default-token-pmbfm": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pmbfm" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.901342 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135445     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bbnlz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bbnlz" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.901547 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:53 old-k8s-version-999803 kubelet[663]: E0127 12:09:53.135737     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-999803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-999803' and this object
	W0127 12:15:26.910804 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:55 old-k8s-version-999803 kubelet[663]: E0127 12:09:55.695476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.910996 1099122 logs.go:138] Found kubelet problem: Jan 27 12:09:56 old-k8s-version-999803 kubelet[663]: E0127 12:09:56.587407     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.913762 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:08 old-k8s-version-999803 kubelet[663]: E0127 12:10:08.310539     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.915693 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:18 old-k8s-version-999803 kubelet[663]: E0127 12:10:18.685285     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.916152 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:19 old-k8s-version-999803 kubelet[663]: E0127 12:10:19.697364     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.916671 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:23 old-k8s-version-999803 kubelet[663]: E0127 12:10:23.289410     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.917136 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:27 old-k8s-version-999803 kubelet[663]: E0127 12:10:27.725508     663 pod_workers.go:191] Error syncing pod f73574be-9aec-4a33-ac88-97d900488a22 ("storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f73574be-9aec-4a33-ac88-97d900488a22)"
	W0127 12:15:26.917721 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:29 old-k8s-version-999803 kubelet[663]: E0127 12:10:29.743840     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.920519 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:35 old-k8s-version-999803 kubelet[663]: E0127 12:10:35.298365     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.920848 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:38 old-k8s-version-999803 kubelet[663]: E0127 12:10:38.666640     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.921170 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:46 old-k8s-version-999803 kubelet[663]: E0127 12:10:46.289557     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.921758 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:53 old-k8s-version-999803 kubelet[663]: E0127 12:10:53.813459     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.921942 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:57 old-k8s-version-999803 kubelet[663]: E0127 12:10:57.289244     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.922272 1099122 logs.go:138] Found kubelet problem: Jan 27 12:10:58 old-k8s-version-999803 kubelet[663]: E0127 12:10:58.662563     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.922455 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.289633     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.922880 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:10 old-k8s-version-999803 kubelet[663]: E0127 12:11:10.290664     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.925368 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:21 old-k8s-version-999803 kubelet[663]: E0127 12:11:21.295133     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.925709 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:25 old-k8s-version-999803 kubelet[663]: E0127 12:11:25.289132     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.925894 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:36 old-k8s-version-999803 kubelet[663]: E0127 12:11:36.291476     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.926506 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:40 old-k8s-version-999803 kubelet[663]: E0127 12:11:40.940519     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.926833 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:48 old-k8s-version-999803 kubelet[663]: E0127 12:11:48.662783     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.927017 1099122 logs.go:138] Found kubelet problem: Jan 27 12:11:50 old-k8s-version-999803 kubelet[663]: E0127 12:11:50.289726     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.927348 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.288851     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.927543 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:03 old-k8s-version-999803 kubelet[663]: E0127 12:12:03.289979     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.927729 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:14 old-k8s-version-999803 kubelet[663]: E0127 12:12:14.295027     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.928059 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:18 old-k8s-version-999803 kubelet[663]: E0127 12:12:18.288843     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.928244 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:26 old-k8s-version-999803 kubelet[663]: E0127 12:12:26.289650     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.928570 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:30 old-k8s-version-999803 kubelet[663]: E0127 12:12:30.289329     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.928753 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:37 old-k8s-version-999803 kubelet[663]: E0127 12:12:37.289235     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.929138 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:44 old-k8s-version-999803 kubelet[663]: E0127 12:12:44.288931     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.931588 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:50 old-k8s-version-999803 kubelet[663]: E0127 12:12:50.299810     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0127 12:15:26.931921 1099122 logs.go:138] Found kubelet problem: Jan 27 12:12:57 old-k8s-version-999803 kubelet[663]: E0127 12:12:57.288813     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.932106 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:03 old-k8s-version-999803 kubelet[663]: E0127 12:13:03.289391     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.932704 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:11 old-k8s-version-999803 kubelet[663]: E0127 12:13:11.199586     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.932890 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:14 old-k8s-version-999803 kubelet[663]: E0127 12:13:14.289503     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.933222 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:18 old-k8s-version-999803 kubelet[663]: E0127 12:13:18.662572     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.933406 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:29 old-k8s-version-999803 kubelet[663]: E0127 12:13:29.289301     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.933735 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:31 old-k8s-version-999803 kubelet[663]: E0127 12:13:31.288795     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.933921 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.289460     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.934248 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:42 old-k8s-version-999803 kubelet[663]: E0127 12:13:42.290963     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.934432 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:53 old-k8s-version-999803 kubelet[663]: E0127 12:13:53.289152     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.934759 1099122 logs.go:138] Found kubelet problem: Jan 27 12:13:57 old-k8s-version-999803 kubelet[663]: E0127 12:13:57.288829     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.934942 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:07 old-k8s-version-999803 kubelet[663]: E0127 12:14:07.289177     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.935267 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:12 old-k8s-version-999803 kubelet[663]: E0127 12:14:12.289332     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.935452 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:21 old-k8s-version-999803 kubelet[663]: E0127 12:14:21.289169     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.935784 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:26 old-k8s-version-999803 kubelet[663]: E0127 12:14:26.288817     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.935967 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:32 old-k8s-version-999803 kubelet[663]: E0127 12:14:32.289382     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.936292 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:37 old-k8s-version-999803 kubelet[663]: E0127 12:14:37.288718     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.936477 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.936802 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.936988 1099122 logs.go:138] Found kubelet problem: Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.937320 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.937506 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:26.937836 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:16 old-k8s-version-999803 kubelet[663]: E0127 12:15:16.289567     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:26.938019 1099122 logs.go:138] Found kubelet problem: Jan 27 12:15:24 old-k8s-version-999803 kubelet[663]: E0127 12:15:24.289280     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 12:15:26.938029 1099122 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:15:26.938044 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:15:27.111830 1099122 logs.go:123] Gathering logs for kube-apiserver [f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e] ...
	I0127 12:15:27.111864 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e"
	I0127 12:15:27.165850 1099122 logs.go:123] Gathering logs for kubernetes-dashboard [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6] ...
	I0127 12:15:27.165886 1099122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6"
	I0127 12:15:27.211532 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:27.211556 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 12:15:27.211643 1099122 out.go:270] X Problems detected in kubelet:
	W0127 12:15:27.211657 1099122 out.go:270]   Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:27.211679 1099122 out.go:270]   Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:27.211711 1099122 out.go:270]   Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0127 12:15:27.211720 1099122 out.go:270]   Jan 27 12:15:16 old-k8s-version-999803 kubelet[663]: E0127 12:15:16.289567     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	W0127 12:15:27.211727 1099122 out.go:270]   Jan 27 12:15:24 old-k8s-version-999803 kubelet[663]: E0127 12:15:24.289280     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0127 12:15:27.211737 1099122 out.go:358] Setting ErrFile to fd 2...
	I0127 12:15:27.211744 1099122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:15:37.213669 1099122 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0127 12:15:37.226947 1099122 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0127 12:15:37.229972 1099122 out.go:201] 
	W0127 12:15:37.232459 1099122 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0127 12:15:37.232495 1099122 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0127 12:15:37.232520 1099122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0127 12:15:37.232529 1099122 out.go:270] * 
	W0127 12:15:37.233525 1099122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 12:15:37.237108 1099122 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	40a6145bb133e       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   2b138d8b8e124       dashboard-metrics-scraper-8d5bb5db8-j2wzf
	1ff19c9b7a633       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   5e923d1d0e532       storage-provisioner
	d60a3e1bf1944       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   91c673f7cae7e       kubernetes-dashboard-cd95d586-rjt7d
	244500be1f9c8       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   eb5661c2b8b94       kube-proxy-nt2l9
	e089ef56a1bbf       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   76fb1a81ce155       busybox
	072df961772d8       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   5e923d1d0e532       storage-provisioner
	66bd9052021d0       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   e0dcf3e01f0f4       coredns-74ff55c5b-8pc5m
	8eb127c0b3e7c       2be0bcf609c65       5 minutes ago       Running             kindnet-cni                 1                   265153f6608b1       kindnet-wxrcg
	709fbfcd9d83c       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   2e9bac101deb8       kube-apiserver-old-k8s-version-999803
	8503304f4f6bb       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   0e313e6b8d512       etcd-old-k8s-version-999803
	35981ab98de2c       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   bc94ad27d5a90       kube-controller-manager-old-k8s-version-999803
	15e7801bd3deb       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   c9f1411109218       kube-scheduler-old-k8s-version-999803
	a83e94f3f97f3       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   367fc2acf537a       busybox
	006c6bbf3f760       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   ea6e3f9f35fbc       coredns-74ff55c5b-8pc5m
	92c639516afeb       2be0bcf609c65       8 minutes ago       Exited              kindnet-cni                 0                   32ec40ccbb455       kindnet-wxrcg
	69ff2212fa934       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   4954b0a0eaa72       kube-proxy-nt2l9
	f80be7361c799       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   f03db3b4a9126       kube-apiserver-old-k8s-version-999803
	24ee4e7a4be2f       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   7336995587cc5       kube-controller-manager-old-k8s-version-999803
	8a26e44a90d82       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   34af66dcd10ec       kube-scheduler-old-k8s-version-999803
	2fce96f610992       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   856d0c31b309b       etcd-old-k8s-version-999803
	
	
	==> containerd <==
	Jan 27 12:11:40 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:11:40.403957721Z" level=info msg="received exit event container_id:\"a0228c5a9654992ffae9e34e87744d06e6d850223bd15ca6cd7c2efee6edb268\" id:\"a0228c5a9654992ffae9e34e87744d06e6d850223bd15ca6cd7c2efee6edb268\" pid:2980 exit_status:255 exited_at:{seconds:1737979900 nanos:402001912}"
	Jan 27 12:11:40 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:11:40.404209629Z" level=info msg="StartContainer for \"a0228c5a9654992ffae9e34e87744d06e6d850223bd15ca6cd7c2efee6edb268\" returns successfully"
	Jan 27 12:11:40 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:11:40.441723446Z" level=info msg="shim disconnected" id=a0228c5a9654992ffae9e34e87744d06e6d850223bd15ca6cd7c2efee6edb268 namespace=k8s.io
	Jan 27 12:11:40 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:11:40.441958329Z" level=warning msg="cleaning up after shim disconnected" id=a0228c5a9654992ffae9e34e87744d06e6d850223bd15ca6cd7c2efee6edb268 namespace=k8s.io
	Jan 27 12:11:40 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:11:40.442121943Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:11:40 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:11:40.955691546Z" level=info msg="RemoveContainer for \"926cfd6317dda80a16c8945f6d9903e72f1423f2c68d3f23fdf7212b1118eff8\""
	Jan 27 12:11:40 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:11:40.959927325Z" level=info msg="RemoveContainer for \"926cfd6317dda80a16c8945f6d9903e72f1423f2c68d3f23fdf7212b1118eff8\" returns successfully"
	Jan 27 12:12:50 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:12:50.292025127Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:12:50 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:12:50.297483850Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jan 27 12:12:50 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:12:50.299379222Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jan 27 12:12:50 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:12:50.299429731Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.291932350Z" level=info msg="CreateContainer within sandbox \"2b138d8b8e124838adc3a6301d0d96cc66ffe56272c7314a8159fcc6b934e24a\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.309948135Z" level=info msg="CreateContainer within sandbox \"2b138d8b8e124838adc3a6301d0d96cc66ffe56272c7314a8159fcc6b934e24a\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c\""
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.310742495Z" level=info msg="StartContainer for \"40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c\""
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.384624567Z" level=info msg="StartContainer for \"40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c\" returns successfully"
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.385981736Z" level=info msg="received exit event container_id:\"40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c\" id:\"40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c\" pid:3242 exit_status:255 exited_at:{seconds:1737979990 nanos:385740987}"
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.414485200Z" level=info msg="shim disconnected" id=40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c namespace=k8s.io
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.415427405Z" level=warning msg="cleaning up after shim disconnected" id=40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c namespace=k8s.io
	Jan 27 12:13:10 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:10.415525076Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 27 12:13:11 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:11.201533294Z" level=info msg="RemoveContainer for \"a0228c5a9654992ffae9e34e87744d06e6d850223bd15ca6cd7c2efee6edb268\""
	Jan 27 12:13:11 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:13:11.209160998Z" level=info msg="RemoveContainer for \"a0228c5a9654992ffae9e34e87744d06e6d850223bd15ca6cd7c2efee6edb268\" returns successfully"
	Jan 27 12:15:37 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:15:37.294408088Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:15:37 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:15:37.307708982Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jan 27 12:15:37 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:15:37.309800722Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jan 27 12:15:37 old-k8s-version-999803 containerd[571]: time="2025-01-27T12:15:37.309822760Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [006c6bbf3f760aff2a8219083d9d31a73f2b60e843733fa6dcc1b9f4cd093863] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37642 - 41631 "HINFO IN 5616843203759752398.2136082076748601635. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03179285s
	
	
	==> coredns [66bd9052021d00f07447d0209c88baaf69ca242ff38e150f4b27b7dc2fb0f8ab] <==
	I0127 12:10:26.047581       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 12:09:56.046939567 +0000 UTC m=+0.037396583) (total time: 30.000515314s):
	Trace[2019727887]: [30.000515314s] [30.000515314s] END
	E0127 12:10:26.048190       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:10:26.055336       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 12:09:56.047498733 +0000 UTC m=+0.037955749) (total time: 30.007809178s):
	Trace[939984059]: [30.007809178s] [30.007809178s] END
	E0127 12:10:26.055594       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:10:26.055636       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-01-27 12:09:56.055250325 +0000 UTC m=+0.045707341) (total time: 30.000367642s):
	Trace[911902081]: [30.000367642s] [30.000367642s] END
	E0127 12:10:26.055771       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35695 - 6719 "HINFO IN 4182568448720788649.3492565323918288497. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023927941s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-999803
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-999803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=old-k8s-version-999803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_07_19_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:07:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-999803
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:15:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:10:43 +0000   Mon, 27 Jan 2025 12:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:10:43 +0000   Mon, 27 Jan 2025 12:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:10:43 +0000   Mon, 27 Jan 2025 12:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:10:43 +0000   Mon, 27 Jan 2025 12:07:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-999803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd860e1a1bd34c18b89313fc3e5f5650
	  System UUID:                885e0fc2-a16e-4a09-a4af-833dab2b5e11
	  Boot ID:                    9a2b5a8b-82ce-43cf-92bd-6297263d30a0
	  Kernel Version:             5.15.0-1075-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-74ff55c5b-8pc5m                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m3s
	  kube-system                 etcd-old-k8s-version-999803                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m11s
	  kube-system                 kindnet-wxrcg                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m3s
	  kube-system                 kube-apiserver-old-k8s-version-999803             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-controller-manager-old-k8s-version-999803    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-nt2l9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-old-k8s-version-999803             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 metrics-server-9975d5f86-qzhdd                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m28s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-j2wzf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-rjt7d               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m30s (x5 over 8m30s)  kubelet     Node old-k8s-version-999803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x5 over 8m30s)  kubelet     Node old-k8s-version-999803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x4 over 8m30s)  kubelet     Node old-k8s-version-999803 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m11s                  kubelet     Node old-k8s-version-999803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m11s                  kubelet     Node old-k8s-version-999803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m11s                  kubelet     Node old-k8s-version-999803 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m3s                   kubelet     Node old-k8s-version-999803 status is now: NodeReady
	  Normal  Starting                 8m2s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-999803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-999803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-999803 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [2fce96f610992c05bada065ffdfd6882c70eac2e5f13eedecd13f482a579e4c1] <==
	2025-01-27 12:07:09.455172 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	raft2025/01/27 12:07:09 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2025/01/27 12:07:09 INFO: ea7e25599daad906 became candidate at term 2
	raft2025/01/27 12:07:09 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2025/01/27 12:07:09 INFO: ea7e25599daad906 became leader at term 2
	raft2025/01/27 12:07:09 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2025-01-27 12:07:09.941902 I | etcdserver: published {Name:old-k8s-version-999803 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2025-01-27 12:07:09.942232 I | etcdserver: setting up the initial cluster version to 3.4
	2025-01-27 12:07:09.942415 I | embed: ready to serve client requests
	2025-01-27 12:07:09.943098 N | etcdserver/membership: set the initial cluster version to 3.4
	2025-01-27 12:07:09.943288 I | etcdserver/api: enabled capabilities for version 3.4
	2025-01-27 12:07:09.943370 I | embed: ready to serve client requests
	2025-01-27 12:07:09.944785 I | embed: serving client requests on 127.0.0.1:2379
	2025-01-27 12:07:09.957919 I | embed: serving client requests on 192.168.76.2:2379
	2025-01-27 12:07:30.494137 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:07:39.487283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:07:49.487295 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:07:59.487221 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:08:09.487475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:08:19.487241 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:08:29.487605 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:08:39.487329 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:08:49.487315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:08:59.487460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:09:09.488993 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [8503304f4f6bb440cdb9d56690c02a51bbf04721554ec4ea2e4f7beccf7609a9] <==
	2025-01-27 12:11:36.778428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:11:46.778419 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:11:56.778504 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:12:06.778397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:12:16.778653 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:12:26.778412 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:12:36.778420 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:12:46.778609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:12:56.778511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:13:06.778377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:13:16.778336 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:13:26.778531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:13:36.778437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:13:46.778518 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:13:56.778615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:14:06.778481 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:14:16.778427 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:14:26.778409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:14:36.778348 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:14:46.779381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:14:56.778435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:15:06.778388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:15:16.778473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:15:26.778691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-01-27 12:15:36.778436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 12:15:39 up  4:58,  0 users,  load average: 0.50, 1.73, 2.48
	Linux old-k8s-version-999803 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8eb127c0b3e7ca2dbdea667c52e0c0902b2083ba316fae162ca871ba65874026] <==
	I0127 12:13:36.625139       1 main.go:301] handling current node
	I0127 12:13:46.626124       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:13:46.626313       1 main.go:301] handling current node
	I0127 12:13:56.618193       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:13:56.618229       1 main.go:301] handling current node
	I0127 12:14:06.625171       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:14:06.625208       1 main.go:301] handling current node
	I0127 12:14:16.627196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:14:16.627231       1 main.go:301] handling current node
	I0127 12:14:26.626426       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:14:26.626461       1 main.go:301] handling current node
	I0127 12:14:36.624088       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:14:36.624130       1 main.go:301] handling current node
	I0127 12:14:46.626697       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:14:46.626731       1 main.go:301] handling current node
	I0127 12:14:56.618178       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:14:56.618213       1 main.go:301] handling current node
	I0127 12:15:06.623942       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:15:06.623979       1 main.go:301] handling current node
	I0127 12:15:16.627006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:15:16.627210       1 main.go:301] handling current node
	I0127 12:15:26.626449       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:15:26.626482       1 main.go:301] handling current node
	I0127 12:15:36.621468       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:15:36.621518       1 main.go:301] handling current node
	
	
	==> kindnet [92c639516afeb9428e0222d9b5f8645599fd3d251f65494aa754e34cf98cb81f] <==
	I0127 12:07:38.418601       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0127 12:07:38.730544       1 controller.go:361] Starting controller kube-network-policies
	I0127 12:07:38.730567       1 controller.go:365] Waiting for informer caches to sync
	I0127 12:07:38.730574       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0127 12:07:38.931045       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0127 12:07:38.931075       1 metrics.go:61] Registering metrics
	I0127 12:07:38.931292       1 controller.go:401] Syncing nftables rules
	I0127 12:07:48.737101       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:07:48.737213       1 main.go:301] handling current node
	I0127 12:07:58.729887       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:07:58.729923       1 main.go:301] handling current node
	I0127 12:08:08.737122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:08:08.737180       1 main.go:301] handling current node
	I0127 12:08:18.739364       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:08:18.739428       1 main.go:301] handling current node
	I0127 12:08:28.730600       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:08:28.730642       1 main.go:301] handling current node
	I0127 12:08:38.730021       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:08:38.730074       1 main.go:301] handling current node
	I0127 12:08:48.729881       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:08:48.729913       1 main.go:301] handling current node
	I0127 12:08:58.738778       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:08:58.738822       1 main.go:301] handling current node
	I0127 12:09:08.736188       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0127 12:09:08.736296       1 main.go:301] handling current node
	
	
	==> kube-apiserver [709fbfcd9d83c342cc7508e2e43c597f833305e8b77752082803c329644a7a5f] <==
	I0127 12:12:07.272778       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:12:07.272787       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 12:12:47.503780       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:12:47.503839       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:12:47.503849       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0127 12:12:56.837058       1 handler_proxy.go:102] no RequestInfo found in the context
	E0127 12:12:56.837134       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0127 12:12:56.837143       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:13:30.899942       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:13:30.900003       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:13:30.900033       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 12:14:13.778646       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:14:13.778689       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:14:13.778698       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 12:14:52.745323       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:14:52.745372       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:14:52.745381       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0127 12:14:54.341557       1 handler_proxy.go:102] no RequestInfo found in the context
	E0127 12:14:54.341629       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0127 12:14:54.341638       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 12:15:34.985146       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:15:34.985189       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:15:34.985198       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [f80be7361c799e67eb8078abf9c00c28743a0b13cb4d68bebcf185a9bbb0f91e] <==
	I0127 12:07:16.736324       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0127 12:07:16.736635       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 12:07:16.748292       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0127 12:07:16.759723       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0127 12:07:16.759909       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0127 12:07:17.252265       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:07:17.301514       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0127 12:07:17.406392       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0127 12:07:17.407703       1 controller.go:606] quota admission added evaluator for: endpoints
	I0127 12:07:17.411454       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:07:18.515887       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0127 12:07:19.020712       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0127 12:07:19.076100       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0127 12:07:27.495259       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:07:35.142215       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0127 12:07:35.271513       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0127 12:07:50.304386       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:07:50.304432       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:07:50.304442       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 12:08:22.204668       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:08:22.204716       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:08:22.204726       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0127 12:09:00.136238       1 client.go:360] parsed scheme: "passthrough"
	I0127 12:09:00.136300       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0127 12:09:00.136310       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [24ee4e7a4be2f4c68a515762ad5379c22343d7988e486be1f7e8598a1d0a3d84] <==
	I0127 12:07:35.273740       1 range_allocator.go:172] Starting range CIDR allocator
	I0127 12:07:35.273744       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0127 12:07:35.273748       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0127 12:07:35.276395       1 shared_informer.go:247] Caches are synced for GC 
	I0127 12:07:35.305138       1 shared_informer.go:247] Caches are synced for stateful set 
	I0127 12:07:35.305576       1 range_allocator.go:373] Set node old-k8s-version-999803 PodCIDR to [10.244.0.0/24]
	I0127 12:07:35.305764       1 shared_informer.go:247] Caches are synced for resource quota 
	I0127 12:07:35.305938       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nt2l9"
	I0127 12:07:35.313777       1 shared_informer.go:247] Caches are synced for taint 
	I0127 12:07:35.314067       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0127 12:07:35.314622       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-999803. Assuming now as a timestamp.
	I0127 12:07:35.314826       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0127 12:07:35.313979       1 shared_informer.go:247] Caches are synced for attach detach 
	I0127 12:07:35.314235       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0127 12:07:35.314546       1 event.go:291] "Event occurred" object="old-k8s-version-999803" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-999803 event: Registered Node old-k8s-version-999803 in Controller"
	I0127 12:07:35.320839       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wxrcg"
	I0127 12:07:35.464162       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0127 12:07:35.723800       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0127 12:07:35.723828       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0127 12:07:35.764352       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0127 12:07:36.800036       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0127 12:07:36.846581       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-g2bxq"
	I0127 12:07:40.315151       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0127 12:09:09.531270       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0127 12:09:09.874560       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [35981ab98de2c587686af3a8152486680cf438d67949177bf86e782a454cdc2d] <==
	W0127 12:11:18.674025       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:11:43.422893       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:11:50.324680       1 request.go:655] Throttling request took 1.048372026s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 12:11:51.176278       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:12:13.924853       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:12:22.826631       1 request.go:655] Throttling request took 1.048370913s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0127 12:12:23.678086       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:12:44.426942       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:12:55.378486       1 request.go:655] Throttling request took 1.042132309s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0127 12:12:56.179955       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:13:14.929217       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:13:27.830492       1 request.go:655] Throttling request took 1.048363919s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0127 12:13:28.682060       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:13:45.431012       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:14:00.332469       1 request.go:655] Throttling request took 1.048414217s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 12:14:01.183977       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:14:15.932865       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:14:32.833122       1 request.go:655] Throttling request took 1.048339251s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 12:14:33.684592       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:14:46.434822       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:15:05.335320       1 request.go:655] Throttling request took 1.048652726s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W0127 12:15:06.186502       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0127 12:15:16.936897       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0127 12:15:37.837094       1 request.go:655] Throttling request took 1.048277989s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0127 12:15:38.688684       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [244500be1f9c883cb6f931dd2caa39adf2ce7dd681e6b395ffdd428b7cd73e56] <==
	I0127 12:09:57.134580       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0127 12:09:57.134868       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0127 12:09:57.162834       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0127 12:09:57.163194       1 server_others.go:185] Using iptables Proxier.
	I0127 12:09:57.164018       1 server.go:650] Version: v1.20.0
	I0127 12:09:57.164914       1 config.go:315] Starting service config controller
	I0127 12:09:57.164981       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0127 12:09:57.165257       1 config.go:224] Starting endpoint slice config controller
	I0127 12:09:57.165301       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0127 12:09:57.265311       1 shared_informer.go:247] Caches are synced for service config 
	I0127 12:09:57.265412       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [69ff2212fa93444c9e96f052d22f3fcf7b914aeed28521b86cc27744fa7cb63d] <==
	I0127 12:07:36.490369       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0127 12:07:36.490455       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0127 12:07:36.527640       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0127 12:07:36.527731       1 server_others.go:185] Using iptables Proxier.
	I0127 12:07:36.527944       1 server.go:650] Version: v1.20.0
	I0127 12:07:36.528448       1 config.go:315] Starting service config controller
	I0127 12:07:36.528457       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0127 12:07:36.530615       1 config.go:224] Starting endpoint slice config controller
	I0127 12:07:36.530627       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0127 12:07:36.628592       1 shared_informer.go:247] Caches are synced for service config 
	I0127 12:07:36.632473       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [15e7801bd3deb080be178ee5dca3e9054b157598660a5143e5f6027505f59e47] <==
	I0127 12:09:46.791692       1 serving.go:331] Generated self-signed cert in-memory
	W0127 12:09:52.864812       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 12:09:52.867584       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 12:09:52.867792       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 12:09:52.867888       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:09:53.330453       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0127 12:09:53.330532       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:09:53.330538       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:09:53.330551       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0127 12:09:53.471075       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [8a26e44a90d82cc1704ea53da393d1619a1aa80921953105454f66345703881b] <==
	W0127 12:07:15.878573       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 12:07:15.878687       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 12:07:15.878716       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 12:07:15.878756       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:07:15.945998       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0127 12:07:15.955449       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0127 12:07:15.955732       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:07:15.955831       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 12:07:15.981813       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:07:15.989495       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:07:15.989850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 12:07:15.990070       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:07:15.990281       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:07:15.990478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 12:07:15.990672       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:07:15.990923       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 12:07:15.991099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:07:15.991285       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:07:15.991455       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:07:15.991641       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 12:07:16.841992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 12:07:16.971713       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:07:17.034727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:07:17.086926       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:07:17.556037       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jan 27 12:14:07 old-k8s-version-999803 kubelet[663]: E0127 12:14:07.289177     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:14:12 old-k8s-version-999803 kubelet[663]: I0127 12:14:12.288480     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c
	Jan 27 12:14:12 old-k8s-version-999803 kubelet[663]: E0127 12:14:12.289332     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	Jan 27 12:14:21 old-k8s-version-999803 kubelet[663]: E0127 12:14:21.289169     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:14:26 old-k8s-version-999803 kubelet[663]: I0127 12:14:26.288415     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c
	Jan 27 12:14:26 old-k8s-version-999803 kubelet[663]: E0127 12:14:26.288817     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	Jan 27 12:14:32 old-k8s-version-999803 kubelet[663]: E0127 12:14:32.289382     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:14:37 old-k8s-version-999803 kubelet[663]: I0127 12:14:37.288373     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c
	Jan 27 12:14:37 old-k8s-version-999803 kubelet[663]: E0127 12:14:37.288718     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	Jan 27 12:14:47 old-k8s-version-999803 kubelet[663]: E0127 12:14:47.289211     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: I0127 12:14:50.288800     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c
	Jan 27 12:14:50 old-k8s-version-999803 kubelet[663]: E0127 12:14:50.289608     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	Jan 27 12:14:58 old-k8s-version-999803 kubelet[663]: E0127 12:14:58.289567     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: I0127 12:15:01.288522     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c
	Jan 27 12:15:01 old-k8s-version-999803 kubelet[663]: E0127 12:15:01.288887     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	Jan 27 12:15:12 old-k8s-version-999803 kubelet[663]: E0127 12:15:12.289984     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:15:16 old-k8s-version-999803 kubelet[663]: I0127 12:15:16.289291     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c
	Jan 27 12:15:16 old-k8s-version-999803 kubelet[663]: E0127 12:15:16.289567     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	Jan 27 12:15:24 old-k8s-version-999803 kubelet[663]: E0127 12:15:24.289280     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 27 12:15:31 old-k8s-version-999803 kubelet[663]: I0127 12:15:31.288441     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40a6145bb133e4fd91dabd154367b0d0e4a5c11fbad40fb3fb220613822fb05c
	Jan 27 12:15:31 old-k8s-version-999803 kubelet[663]: E0127 12:15:31.288809     663 pod_workers.go:191] Error syncing pod 899b8438-3079-49ad-86e7-4d860b77226a ("dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-j2wzf_kubernetes-dashboard(899b8438-3079-49ad-86e7-4d860b77226a)"
	Jan 27 12:15:37 old-k8s-version-999803 kubelet[663]: E0127 12:15:37.310111     663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jan 27 12:15:37 old-k8s-version-999803 kubelet[663]: E0127 12:15:37.310172     663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jan 27 12:15:37 old-k8s-version-999803 kubelet[663]: E0127 12:15:37.310670     663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-827qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jan 27 12:15:37 old-k8s-version-999803 kubelet[663]: E0127 12:15:37.310716     663 pod_workers.go:191] Error syncing pod 01eb7c2f-4782-4243-88b6-16e0d2b7175f ("metrics-server-9975d5f86-qzhdd_kube-system(01eb7c2f-4782-4243-88b6-16e0d2b7175f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [d60a3e1bf1944552103fad77eec238f749ba5acad1a8659d1c64c14613b93cc6] <==
	2025/01/27 12:10:21 Using namespace: kubernetes-dashboard
	2025/01/27 12:10:21 Using in-cluster config to connect to apiserver
	2025/01/27 12:10:21 Using secret token for csrf signing
	2025/01/27 12:10:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/27 12:10:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/27 12:10:21 Successful initial request to the apiserver, version: v1.20.0
	2025/01/27 12:10:21 Generating JWE encryption key
	2025/01/27 12:10:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/27 12:10:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/27 12:10:21 Initializing JWE encryption key from synchronized object
	2025/01/27 12:10:21 Creating in-cluster Sidecar client
	2025/01/27 12:10:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:10:21 Serving insecurely on HTTP port: 9090
	2025/01/27 12:10:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:11:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:11:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:12:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:12:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:13:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:13:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:14:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:14:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:15:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 12:10:21 Starting overwatch
	
	
	==> storage-provisioner [072df961772d8282ba54c52a0302c98c79f57d6e3f2dd679b1852199b2455bb4] <==
	I0127 12:09:56.184951       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0127 12:10:26.188438       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [1ff19c9b7a63387c9efa0456868dae9c8e181163e1563d382675a4945e811609] <==
	I0127 12:10:43.444818       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 12:10:43.470074       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 12:10:43.470359       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 12:11:00.944474       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 12:11:00.947513       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-999803_d448eea1-8bf9-4e1f-b48b-510603e45bd1!
	I0127 12:11:00.947729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3144bf25-28ec-44c1-96c6-b5f616047066", APIVersion:"v1", ResourceVersion:"849", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-999803_d448eea1-8bf9-4e1f-b48b-510603e45bd1 became leader
	I0127 12:11:01.048544       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-999803_d448eea1-8bf9-4e1f-b48b-510603e45bd1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-999803 -n old-k8s-version-999803
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-999803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-qzhdd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-999803 describe pod metrics-server-9975d5f86-qzhdd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-999803 describe pod metrics-server-9975d5f86-qzhdd: exit status 1 (117.280602ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-qzhdd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-999803 describe pod metrics-server-9975d5f86-qzhdd: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.68s)

                                                
                                    

Test pass (299/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.79
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.1/json-events 7.29
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.09
18 TestDownloadOnly/v1.32.1/DeleteAll 0.22
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 214.62
29 TestAddons/serial/Volcano 44.15
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 8.95
35 TestAddons/parallel/Registry 16.28
36 TestAddons/parallel/Ingress 20.62
37 TestAddons/parallel/InspektorGadget 11.74
38 TestAddons/parallel/MetricsServer 6.2
40 TestAddons/parallel/CSI 36.11
41 TestAddons/parallel/Headlamp 18.91
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 53.7
44 TestAddons/parallel/NvidiaDevicePlugin 6.57
45 TestAddons/parallel/Yakd 11.83
47 TestAddons/StoppedEnableDisable 12.24
48 TestCertOptions 38.11
49 TestCertExpiration 226.19
51 TestForceSystemdFlag 39.92
52 TestForceSystemdEnv 44.26
53 TestDockerEnvContainerd 51.38
58 TestErrorSpam/setup 32.46
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.12
61 TestErrorSpam/pause 1.79
62 TestErrorSpam/unpause 1.87
63 TestErrorSpam/stop 1.48
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.35
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.63
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.09
75 TestFunctional/serial/CacheCmd/cache/add_local 1.26
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.01
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 46.5
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.71
86 TestFunctional/serial/LogsFileCmd 1.75
87 TestFunctional/serial/InvalidService 4.98
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 13.4
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.22
97 TestFunctional/parallel/ServiceCmdConnect 9.66
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 26.65
101 TestFunctional/parallel/SSHCmd 0.77
102 TestFunctional/parallel/CpCmd 1.99
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 2.08
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
113 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.39
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
127 TestFunctional/parallel/ServiceCmd/List 0.7
128 TestFunctional/parallel/ProfileCmd/profile_list 0.56
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
131 TestFunctional/parallel/MountCmd/any-port 7.35
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
133 TestFunctional/parallel/ServiceCmd/Format 0.51
134 TestFunctional/parallel/ServiceCmd/URL 0.51
135 TestFunctional/parallel/MountCmd/specific-port 1.31
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.41
137 TestFunctional/parallel/Version/short 0.11
138 TestFunctional/parallel/Version/components 1.35
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
144 TestFunctional/parallel/ImageCommands/Setup 0.7
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 137.32
162 TestMultiControlPlane/serial/DeployApp 30.23
163 TestMultiControlPlane/serial/PingHostFromPods 1.68
164 TestMultiControlPlane/serial/AddWorkerNode 20.89
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.99
167 TestMultiControlPlane/serial/CopyFile 19.41
168 TestMultiControlPlane/serial/StopSecondaryNode 12.86
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
170 TestMultiControlPlane/serial/RestartSecondaryNode 19.43
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 146.13
173 TestMultiControlPlane/serial/DeleteSecondaryNode 10.75
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
175 TestMultiControlPlane/serial/StopCluster 35.85
176 TestMultiControlPlane/serial/RestartCluster 73.97
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
178 TestMultiControlPlane/serial/AddSecondaryNode 40.33
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
183 TestJSONOutput/start/Command 86.01
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.73
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.74
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
208 TestKicCustomNetwork/create_custom_network 41.5
209 TestKicCustomNetwork/use_default_bridge_network 33.98
210 TestKicExistingNetwork 35.29
211 TestKicCustomSubnet 33.32
212 TestKicStaticIP 32.89
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 66.87
217 TestMountStart/serial/StartWithMountFirst 8.95
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 8.54
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.61
222 TestMountStart/serial/VerifyMountPostDelete 0.28
223 TestMountStart/serial/Stop 1.22
224 TestMountStart/serial/RestartStopped 7.18
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 64.68
229 TestMultiNode/serial/DeployApp2Nodes 20.97
230 TestMultiNode/serial/PingHostFrom2Pods 1.02
231 TestMultiNode/serial/AddNode 18.49
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.67
234 TestMultiNode/serial/CopyFile 10.14
235 TestMultiNode/serial/StopNode 2.24
236 TestMultiNode/serial/StartAfterStop 9.49
237 TestMultiNode/serial/RestartKeepsNodes 84.16
238 TestMultiNode/serial/DeleteNode 5.29
239 TestMultiNode/serial/StopMultiNode 23.89
240 TestMultiNode/serial/RestartMultiNode 54.18
241 TestMultiNode/serial/ValidateNameConflict 33.27
246 TestPreload 123.39
251 TestInsufficientStorage 10.24
252 TestRunningBinaryUpgrade 99.27
254 TestKubernetesUpgrade 188.51
255 TestMissingContainerUpgrade 183.58
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 37.3
259 TestNoKubernetes/serial/StartWithStopK8s 21.31
260 TestNoKubernetes/serial/Start 5.54
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
262 TestNoKubernetes/serial/ProfileList 0.97
263 TestNoKubernetes/serial/Stop 1.21
264 TestNoKubernetes/serial/StartNoArgs 6.8
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
266 TestStoppedBinaryUpgrade/Setup 0.59
267 TestStoppedBinaryUpgrade/Upgrade 106.26
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
277 TestPause/serial/Start 92.07
278 TestPause/serial/SecondStartNoReconfiguration 8.29
279 TestPause/serial/Pause 1.03
280 TestPause/serial/VerifyStatus 0.39
281 TestPause/serial/Unpause 0.83
282 TestPause/serial/PauseAgain 1.13
283 TestPause/serial/DeletePaused 3.27
284 TestPause/serial/VerifyDeletedResources 0.84
292 TestNetworkPlugins/group/false 5.86
297 TestStartStop/group/old-k8s-version/serial/FirstStart 144.59
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.81
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.66
300 TestStartStop/group/old-k8s-version/serial/Stop 12.78
302 TestStartStop/group/no-preload/serial/FirstStart 63.54
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
305 TestStartStop/group/no-preload/serial/DeployApp 9.42
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
307 TestStartStop/group/no-preload/serial/Stop 12.11
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/no-preload/serial/SecondStart 289.49
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
313 TestStartStop/group/no-preload/serial/Pause 3.51
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/embed-certs/serial/FirstStart 99.93
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
319 TestStartStop/group/old-k8s-version/serial/Pause 3.45
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.67
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.53
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 279.16
327 TestStartStop/group/embed-certs/serial/DeployApp 9.46
328 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.78
329 TestStartStop/group/embed-certs/serial/Stop 12.64
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
331 TestStartStop/group/embed-certs/serial/SecondStart 266.46
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.37
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/newest-cni/serial/FirstStart 40.53
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
341 TestStartStop/group/embed-certs/serial/Pause 4.24
342 TestNetworkPlugins/group/auto/Start 95.27
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.59
345 TestStartStop/group/newest-cni/serial/Stop 1.36
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
347 TestStartStop/group/newest-cni/serial/SecondStart 19.39
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
351 TestStartStop/group/newest-cni/serial/Pause 3.03
352 TestNetworkPlugins/group/kindnet/Start 54.52
353 TestNetworkPlugins/group/auto/KubeletFlags 0.35
354 TestNetworkPlugins/group/auto/NetCatPod 10.3
355 TestNetworkPlugins/group/auto/DNS 0.23
356 TestNetworkPlugins/group/auto/Localhost 0.15
357 TestNetworkPlugins/group/auto/HairPin 0.15
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
360 TestNetworkPlugins/group/kindnet/NetCatPod 9.41
361 TestNetworkPlugins/group/kindnet/DNS 0.27
362 TestNetworkPlugins/group/kindnet/Localhost 0.25
363 TestNetworkPlugins/group/kindnet/HairPin 0.19
364 TestNetworkPlugins/group/calico/Start 79.23
365 TestNetworkPlugins/group/custom-flannel/Start 57.09
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.33
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.3
370 TestNetworkPlugins/group/calico/NetCatPod 10.27
371 TestNetworkPlugins/group/custom-flannel/DNS 0.24
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
374 TestNetworkPlugins/group/calico/DNS 0.3
375 TestNetworkPlugins/group/calico/Localhost 0.29
376 TestNetworkPlugins/group/calico/HairPin 0.3
377 TestNetworkPlugins/group/enable-default-cni/Start 78.63
378 TestNetworkPlugins/group/flannel/Start 59.86
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
381 TestNetworkPlugins/group/flannel/NetCatPod 10.3
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
384 TestNetworkPlugins/group/flannel/DNS 0.18
385 TestNetworkPlugins/group/flannel/Localhost 0.18
386 TestNetworkPlugins/group/flannel/HairPin 0.14
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
390 TestNetworkPlugins/group/bridge/Start 72.24
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
392 TestNetworkPlugins/group/bridge/NetCatPod 12.35
393 TestNetworkPlugins/group/bridge/DNS 0.17
394 TestNetworkPlugins/group/bridge/Localhost 0.15
395 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-888340 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-888340 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.786208524s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.79s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 11:23:14.416500  893715 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 11:23:14.416581  893715 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-888340
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-888340: exit status 85 (84.916169ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-888340 | jenkins | v1.35.0 | 27 Jan 25 11:23 UTC |          |
	|         | -p download-only-888340        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:23:06
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:23:06.678558  893720 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:23:06.678776  893720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:06.678802  893720 out.go:358] Setting ErrFile to fd 2...
	I0127 11:23:06.678820  893720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:06.679083  893720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	W0127 11:23:06.679278  893720 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20318-888339/.minikube/config/config.json: open /home/jenkins/minikube-integration/20318-888339/.minikube/config/config.json: no such file or directory
	I0127 11:23:06.679746  893720 out.go:352] Setting JSON to true
	I0127 11:23:06.680712  893720 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14732,"bootTime":1737962255,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 11:23:06.680810  893720 start.go:139] virtualization:  
	I0127 11:23:06.684724  893720 out.go:97] [download-only-888340] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	W0127 11:23:06.684927  893720 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 11:23:06.685001  893720 notify.go:220] Checking for updates...
	I0127 11:23:06.687604  893720 out.go:169] MINIKUBE_LOCATION=20318
	I0127 11:23:06.690197  893720 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:23:06.692787  893720 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 11:23:06.695314  893720 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 11:23:06.697865  893720 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 11:23:06.704395  893720 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:23:06.704683  893720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:23:06.729465  893720 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:23:06.729575  893720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:23:06.798112  893720 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:23:06.789316899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:23:06.798223  893720 docker.go:318] overlay module found
	I0127 11:23:06.801094  893720 out.go:97] Using the docker driver based on user configuration
	I0127 11:23:06.801119  893720 start.go:297] selected driver: docker
	I0127 11:23:06.801125  893720 start.go:901] validating driver "docker" against <nil>
	I0127 11:23:06.801238  893720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:23:06.853315  893720 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:23:06.845138673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:23:06.853523  893720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:23:06.853834  893720 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 11:23:06.854004  893720 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:23:06.856988  893720 out.go:169] Using Docker driver with root privileges
	I0127 11:23:06.859588  893720 cni.go:84] Creating CNI manager for ""
	I0127 11:23:06.859662  893720 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 11:23:06.859674  893720 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:23:06.859754  893720 start.go:340] cluster config:
	{Name:download-only-888340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-888340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:23:06.862484  893720 out.go:97] Starting "download-only-888340" primary control-plane node in "download-only-888340" cluster
	I0127 11:23:06.862517  893720 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 11:23:06.865206  893720 out.go:97] Pulling base image v0.0.46 ...
	I0127 11:23:06.865232  893720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 11:23:06.865339  893720 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 11:23:06.881300  893720 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 11:23:06.881487  893720 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 11:23:06.881584  893720 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 11:23:06.926486  893720 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 11:23:06.926510  893720 cache.go:56] Caching tarball of preloaded images
	I0127 11:23:06.926678  893720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 11:23:06.929658  893720 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 11:23:06.929693  893720 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0127 11:23:07.018283  893720 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0127 11:23:11.509380  893720 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	
	
	* The control-plane node download-only-888340 host does not exist
	  To start a cluster, run: "minikube start -p download-only-888340"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
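Exit status 85 is the expected outcome here: the profile was created with --download-only, so no host exists and "minikube logs" has nothing to collect (the stdout above ends with "The control-plane node download-only-888340 host does not exist"). A sketch of the same probe by hand while the profile still exists:

  out/minikube-linux-arm64 logs -p download-only-888340
  echo $?   # 85 in this run; the test treats the non-zero exit as acceptable for a download-only profile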

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-888340
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (7.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-601233 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-601233 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.291164128s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (7.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 11:23:22.161146  893715 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 11:23:22.161183  893715 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-601233
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-601233: exit status 85 (85.089168ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-888340 | jenkins | v1.35.0 | 27 Jan 25 11:23 UTC |                     |
	|         | -p download-only-888340        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 11:23 UTC | 27 Jan 25 11:23 UTC |
	| delete  | -p download-only-888340        | download-only-888340 | jenkins | v1.35.0 | 27 Jan 25 11:23 UTC | 27 Jan 25 11:23 UTC |
	| start   | -o=json --download-only        | download-only-601233 | jenkins | v1.35.0 | 27 Jan 25 11:23 UTC |                     |
	|         | -p download-only-601233        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:23:14
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:23:14.921513  893919 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:23:14.921710  893919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:14.921737  893919 out.go:358] Setting ErrFile to fd 2...
	I0127 11:23:14.921758  893919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:23:14.922046  893919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:23:14.922550  893919 out.go:352] Setting JSON to true
	I0127 11:23:14.923492  893919 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":14740,"bootTime":1737962255,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 11:23:14.923588  893919 start.go:139] virtualization:  
	I0127 11:23:14.926957  893919 out.go:97] [download-only-601233] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:23:14.927201  893919 notify.go:220] Checking for updates...
	I0127 11:23:14.929849  893919 out.go:169] MINIKUBE_LOCATION=20318
	I0127 11:23:14.932740  893919 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:23:14.935432  893919 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 11:23:14.938130  893919 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 11:23:14.940923  893919 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0127 11:23:14.946161  893919 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 11:23:14.946444  893919 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:23:14.967595  893919 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:23:14.967712  893919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:23:15.036669  893919 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 11:23:15.026414508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:23:15.036789  893919 docker.go:318] overlay module found
	I0127 11:23:15.039674  893919 out.go:97] Using the docker driver based on user configuration
	I0127 11:23:15.039715  893919 start.go:297] selected driver: docker
	I0127 11:23:15.039723  893919 start.go:901] validating driver "docker" against <nil>
	I0127 11:23:15.039857  893919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:23:15.098578  893919 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-27 11:23:15.088490913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:23:15.098834  893919 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:23:15.099159  893919 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0127 11:23:15.099335  893919 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 11:23:15.102287  893919 out.go:169] Using Docker driver with root privileges
	I0127 11:23:15.105062  893919 cni.go:84] Creating CNI manager for ""
	I0127 11:23:15.105138  893919 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0127 11:23:15.105155  893919 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:23:15.105238  893919 start.go:340] cluster config:
	{Name:download-only-601233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-601233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:23:15.107984  893919 out.go:97] Starting "download-only-601233" primary control-plane node in "download-only-601233" cluster
	I0127 11:23:15.108025  893919 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0127 11:23:15.110866  893919 out.go:97] Pulling base image v0.0.46 ...
	I0127 11:23:15.110925  893919 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:23:15.111014  893919 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0127 11:23:15.127373  893919 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0127 11:23:15.127496  893919 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0127 11:23:15.127514  893919 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0127 11:23:15.127518  893919 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0127 11:23:15.127526  893919 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0127 11:23:15.176934  893919 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	I0127 11:23:15.176971  893919 cache.go:56] Caching tarball of preloaded images
	I0127 11:23:15.177780  893919 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0127 11:23:15.180690  893919 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 11:23:15.180713  893919 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 ...
	I0127 11:23:15.266308  893919 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:3dfa1a6dfbdb6fd11337c34d558e517e -> /home/jenkins/minikube-integration/20318-888339/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-601233 host does not exist
	  To start a cluster, run: "minikube start -p download-only-601233"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-601233
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 11:23:23.461197  893715 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-178454 --alsologtostderr --binary-mirror http://127.0.0.1:38951 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-178454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-178454
--- PASS: TestBinaryMirror (0.58s)
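TestBinaryMirror has minikube fetch the Kubernetes binaries from a local HTTP mirror rather than dl.k8s.io. A sketch of the flags as exercised in this run (38951 is simply the port the test's local server happened to bind; --download-only keeps the profile from actually starting):

  out/minikube-linux-arm64 start --download-only -p binary-mirror-178454 --alsologtostderr \
    --binary-mirror http://127.0.0.1:38951 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 delete -p binary-mirror-178454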

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-033618
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-033618: exit status 85 (75.021835ms)

                                                
                                                
-- stdout --
	* Profile "addons-033618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-033618"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-033618
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-033618: exit status 85 (77.756357ms)

                                                
                                                
-- stdout --
	* Profile "addons-033618" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-033618"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (214.62s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-033618 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-033618 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m34.619978143s)
--- PASS: TestAddons/Setup (214.62s)
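The setup start enables every addon under test in a single invocation. A trimmed sketch of the same shape (addon list abbreviated for readability; the complete flag set is in the Run line above):

  out/minikube-linux-arm64 start -p addons-033618 --wait=true --memory=4000 --alsologtostderr \
    --driver=docker --container-runtime=containerd \
    --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
  # ...plus the remaining --addons flags shown above (volcano, gcp-auth, csi-hostpath-driver, etc.)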

                                                
                                    
x
+
TestAddons/serial/Volcano (44.15s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 54.897363ms
addons_test.go:807: volcano-scheduler stabilized in 54.997274ms
addons_test.go:823: volcano-controller stabilized in 56.231729ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-ll5hv" [eefbfe82-9209-40a4-a72c-1d92c3c2b683] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003671877s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-z2n9c" [2cb26d78-3cda-49ae-b0cb-9849efc72591] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003748836s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-fbv85" [1195fae9-5fca-4cc1-b873-b2d21a46129c] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003603323s
addons_test.go:842: (dbg) Run:  kubectl --context addons-033618 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-033618 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-033618 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [fec6389d-85da-4014-ab7c-853d1e0b3f21] Pending
helpers_test.go:344: "test-job-nginx-0" [fec6389d-85da-4014-ab7c-853d1e0b3f21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [fec6389d-85da-4014-ab7c-853d1e0b3f21] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003151103s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable volcano --alsologtostderr -v=1: (11.481162091s)
--- PASS: TestAddons/serial/Volcano (44.15s)
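The Volcano check submits a vcjob from testdata, waits for its pod, then disables the addon. A sketch of those steps, assuming the kubeconfig context created for this profile (the -l selector mirrors the label the helper polls on):

  kubectl --context addons-033618 create -f testdata/vcjob.yaml
  kubectl --context addons-033618 get vcjob -n my-volcano
  kubectl --context addons-033618 get pods -n my-volcano -l volcano.sh/job-name=test-job
  out/minikube-linux-arm64 -p addons-033618 addons disable volcano --alsologtostderr -v=1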

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-033618 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-033618 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.95s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-033618 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-033618 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd50e05e-6b51-4289-bff4-05a10512edc1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd50e05e-6b51-4289-bff4-05a10512edc1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004218497s
addons_test.go:633: (dbg) Run:  kubectl --context addons-033618 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-033618 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-033618 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-033618 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.95s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.881584ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-vz54p" [35ad3c42-8086-4c54-bdb0-478a1c9c3f3c] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004030446s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xvqr7" [205712af-163a-4be8-88e6-2d8a58f996e3] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004487713s
addons_test.go:331: (dbg) Run:  kubectl --context addons-033618 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-033618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-033618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.252460444s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 ip
2025/01/27 11:28:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.28s)
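The registry addon is probed from two sides: in-cluster through the Service DNS name, and from the host against the node IP on port 5000. A sketch of both probes; the curl line is an assumption standing in for the test's plain HTTP GET to 192.168.49.2:5000:

  kubectl --context addons-033618 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  curl -sI "http://$(out/minikube-linux-arm64 -p addons-033618 ip):5000"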

                                                
                                    
x
+
TestAddons/parallel/Ingress (20.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-033618 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-033618 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-033618 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [937a9413-00c1-4e4e-b0ec-5edb877395f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [937a9413-00c1-4e4e-b0ec-5edb877395f2] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003676118s
I0127 11:29:30.979250  893715 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-033618 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable ingress-dns --alsologtostderr -v=1: (1.085472907s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable ingress --alsologtostderr -v=1: (7.825700523s)
--- PASS: TestAddons/parallel/Ingress (20.62s)
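Two properties are verified here: the nginx Ingress answers inside the node when the Host header matches the rule, and ingress-dns resolves the example hostname from testdata/ingress-dns-example-v1.yaml against the node IP. Both probes as run above:

  out/minikube-linux-arm64 -p addons-033618 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test 192.168.49.2   # 192.168.49.2 is this profile's node IP, reported by "minikube -p addons-033618 ip"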

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tlj88" [10201fd4-2fed-4a04-ad58-2acd8c7a6a74] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004158094s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable inspektor-gadget --alsologtostderr -v=1: (5.731432241s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.2s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.397732ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-vlthk" [0a1cd4a6-fdc0-412b-9d2e-723e6c3923a3] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004674956s
addons_test.go:402: (dbg) Run:  kubectl --context addons-033618 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable metrics-server --alsologtostderr -v=1: (1.071307404s)
--- PASS: TestAddons/parallel/MetricsServer (6.20s)

                                                
                                    
x
+
TestAddons/parallel/CSI (36.11s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 11:28:41.031799  893715 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 11:28:41.038368  893715 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 11:28:41.038403  893715 kapi.go:107] duration metric: took 9.660328ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.672225ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-033618 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-033618 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ca5b2725-9986-4a07-b6d8-2443d737d837] Pending
helpers_test.go:344: "task-pv-pod" [ca5b2725-9986-4a07-b6d8-2443d737d837] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ca5b2725-9986-4a07-b6d8-2443d737d837] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003688223s
addons_test.go:511: (dbg) Run:  kubectl --context addons-033618 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-033618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-033618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-033618 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-033618 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-033618 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-033618 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [365e14dd-b6e8-4d41-b846-b2fcb824f752] Pending
helpers_test.go:344: "task-pv-pod-restore" [365e14dd-b6e8-4d41-b846-b2fcb824f752] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [365e14dd-b6e8-4d41-b846-b2fcb824f752] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003786512s
addons_test.go:553: (dbg) Run:  kubectl --context addons-033618 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-033618 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-033618 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.873377155s)
--- PASS: TestAddons/parallel/CSI (36.11s)
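The CSI flow above is PVC, then pod, then snapshot, then restore, and each wait is a repeated JSONPath poll. The polling primitive on its own, re-run until the PVC phase leaves Pending and the snapshot reports ready:

  kubectl --context addons-033618 get pvc hpvc -o jsonpath={.status.phase} -n default
  kubectl --context addons-033618 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default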

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-033618 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-033618 --alsologtostderr -v=1: (1.073952781s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-d56vh" [8c33ba74-a3ee-41a5-9c0b-ec22fd6fdb1d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-d56vh" [8c33ba74-a3ee-41a5-9c0b-ec22fd6fdb1d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004562425s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable headlamp --alsologtostderr -v=1: (5.822815764s)
--- PASS: TestAddons/parallel/Headlamp (18.91s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-bb7ng" [1d7d08e1-4dc0-46a8-a3ad-8df127c43bc7] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003615242s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.7s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-033618 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-033618 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-033618 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eb4f3c3a-af3b-45c0-aa38-00082d795642] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eb4f3c3a-af3b-45c0-aa38-00082d795642] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eb4f3c3a-af3b-45c0-aa38-00082d795642] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.003953129s
addons_test.go:906: (dbg) Run:  kubectl --context addons-033618 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 ssh "cat /opt/local-path-provisioner/pvc-7f3ad3d4-2908-477a-beb7-1e519f185126_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-033618 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-033618 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (40.480678175s)
--- PASS: TestAddons/parallel/LocalPath (53.70s)
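
The local-path check above can also be replayed manually; a sketch, assuming the storage-provisioner-rancher addon is enabled and using the same testdata manifests (the pvc-..._default_test-pvc directory name is per-run and will differ):

    kubectl --context addons-033618 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-033618 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the test-local-path pod has completed, the file it wrote should be visible
    # on the node under /opt/local-path-provisioner/<pv-name>_default_test-pvc/
    out/minikube-linux-arm64 -p addons-033618 ssh "ls /opt/local-path-provisioner/"
    kubectl --context addons-033618 delete pod test-local-path
    kubectl --context addons-033618 delete pvc test-pvc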

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pm8ns" [abbd3169-92ff-46a5-bc3b-2c617e638dcf] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003760741s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-75tpd" [6808f8c6-c9d7-4db6-81df-753ce0bc0339] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005574248s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-033618 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-033618 addons disable yakd --alsologtostderr -v=1: (5.82216307s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-033618
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-033618: (11.941005066s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-033618
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-033618
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-033618
--- PASS: TestAddons/StoppedEnableDisable (12.24s)
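
This test only asserts that addon toggling still works while the cluster is stopped; the equivalent manual steps, taken directly from the commands above, would be:

    out/minikube-linux-arm64 stop -p addons-033618
    # addon enable/disable is expected to succeed even with the node stopped
    out/minikube-linux-arm64 addons enable dashboard -p addons-033618
    out/minikube-linux-arm64 addons disable dashboard -p addons-033618
    out/minikube-linux-arm64 addons disable gvisor -p addons-033618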

                                                
                                    
x
+
TestCertOptions (38.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-429275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-429275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.415016854s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-429275 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-429275 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-429275 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-429275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-429275
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-429275: (2.024501672s)
--- PASS: TestCertOptions (38.11s)
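
To spot-check the same thing by hand, the generated apiserver certificate can be dumped from inside the node and inspected for the extra SANs and the non-default port. A sketch against a profile started with the flags shown above (the grep patterns are illustrative, not part of the test):

    # the requested IPs/names should appear in the certificate's SAN list
    out/minikube-linux-arm64 -p cert-options-429275 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # the admin kubeconfig on the node should point at the custom apiserver port (8555)
    out/minikube-linux-arm64 ssh -p cert-options-429275 -- "sudo cat /etc/kubernetes/admin.conf" | grep server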

                                                
                                    
x
+
TestCertExpiration (226.19s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-972837 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-972837 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.400107013s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-972837 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-972837 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.210661995s)
helpers_test.go:175: Cleaning up "cert-expiration-972837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-972837
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-972837: (2.581366606s)
--- PASS: TestCertExpiration (226.19s)
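
The effect of --cert-expiration can be observed directly: the first start issues certificates with a 3m lifetime, and (presumably after that lifetime has elapsed, which accounts for most of this test's 226s) the second start with 8760h is expected to reissue them. A hand-run sketch using the certificate path seen in the CertOptions test above:

    out/minikube-linux-arm64 start -p cert-expiration-972837 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p cert-expiration-972837 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"   # notAfter ~3 minutes out
    # after the short-lived cert has expired, restart with a longer expiration
    out/minikube-linux-arm64 start -p cert-expiration-972837 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p cert-expiration-972837 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"   # notAfter ~1 year out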

                                                
                                    
x
+
TestForceSystemdFlag (39.92s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-781923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-781923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.041217921s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-781923 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-781923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-781923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-781923: (2.360553978s)
--- PASS: TestForceSystemdFlag (39.92s)
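
The `cat /etc/containerd/config.toml` step is presumably verifying that --force-systemd switched the CRI runtime to the systemd cgroup driver; a quick manual equivalent (the grep pattern is an assumption, not taken from the test source):

    out/minikube-linux-arm64 start -p force-systemd-flag-781923 --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
    # with --force-systemd the containerd runc options should carry SystemdCgroup = true
    out/minikube-linux-arm64 -p force-systemd-flag-781923 ssh "grep SystemdCgroup /etc/containerd/config.toml"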

                                                
                                    
x
+
TestForceSystemdEnv (44.26s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-947488 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-947488 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.274895497s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-947488 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-947488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-947488
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-947488: (2.638306166s)
--- PASS: TestForceSystemdEnv (44.26s)

                                                
                                    
x
+
TestDockerEnvContainerd (51.38s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-108890 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-108890 --driver=docker  --container-runtime=containerd: (30.649855873s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-108890"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TvTpmy0d7FPa/agent.914891" SSH_AGENT_PID="914892" DOCKER_HOST=ssh://docker@127.0.0.1:33567 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TvTpmy0d7FPa/agent.914891" SSH_AGENT_PID="914892" DOCKER_HOST=ssh://docker@127.0.0.1:33567 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TvTpmy0d7FPa/agent.914891" SSH_AGENT_PID="914892" DOCKER_HOST=ssh://docker@127.0.0.1:33567 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (6.195968917s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-TvTpmy0d7FPa/agent.914891" SSH_AGENT_PID="914892" DOCKER_HOST=ssh://docker@127.0.0.1:33567 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-108890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-108890
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-108890: (2.198980956s)
--- PASS: TestDockerEnvContainerd (51.38s)
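
The sequence above shows how `docker-env --ssh-host --ssh-add` is meant to be consumed: it prints SSH_AUTH_SOCK, SSH_AGENT_PID and a DOCKER_HOST=ssh://docker@127.0.0.1:<port> pointing the docker CLI at the daemon inside the minikube node, after which ordinary docker commands (including a build with BuildKit disabled) run against the node. A hand-run sketch; eval-ing the printed exports is the usual pattern, and the SSH port (33567 in this run) varies per run:

    out/minikube-linux-arm64 start -p dockerenv-108890 --driver=docker --container-runtime=containerd
    # inspect the exports, then apply them to the current shell
    out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-108890
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-108890)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls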

                                                
                                    
x
+
TestErrorSpam/setup (32.46s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-989638 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-989638 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-989638 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-989638 --driver=docker  --container-runtime=containerd: (32.45556588s)
--- PASS: TestErrorSpam/setup (32.46s)

                                                
                                    
x
+
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
x
+
TestErrorSpam/status (1.12s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
x
+
TestErrorSpam/pause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 pause
--- PASS: TestErrorSpam/pause (1.79s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.87s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 stop: (1.260121956s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-989638 --log_dir /tmp/nospam-989638 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20318-888339/.minikube/files/etc/test/nested/copy/893715/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.35s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451719 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0127 11:31:58.790506  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:58.796789  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:58.808091  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:58.829420  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:58.870732  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:58.952053  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:59.113345  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:31:59.434872  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:32:00.076899  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:32:01.358253  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:32:03.920165  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:32:09.042095  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:32:19.283517  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-451719 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.351233523s)
--- PASS: TestFunctional/serial/StartWithProxy (52.35s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 11:32:26.280004  893715 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451719 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-451719 --alsologtostderr -v=8: (6.630452449s)
functional_test.go:663: soft start took 6.632769756s for "functional-451719" cluster.
I0127 11:32:32.911134  893715 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (6.63s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-451719 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 cache add registry.k8s.io/pause:3.1: (1.508617857s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 cache add registry.k8s.io/pause:3.3: (1.364521894s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 cache add registry.k8s.io/pause:latest: (1.218941709s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-451719 /tmp/TestFunctionalserialCacheCmdcacheadd_local139278727/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cache add minikube-local-cache-test:functional-451719
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cache delete minikube-local-cache-test:functional-451719
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-451719
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.301344ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cache reload
E0127 11:32:39.765745  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 cache reload: (1.124366131s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.01s)
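
What this exercise amounts to: an image previously added with `cache add` is deleted from the node's containerd store, `crictl inspecti` confirms it is gone (exit 1, "no such image"), and `cache reload` pushes the cached image back. A minimal manual sketch against the same profile:

    out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-linux-arm64 -p functional-451719 cache reload
    out/minikube-linux-arm64 -p functional-451719 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again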

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 kubectl -- --context functional-451719 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-451719 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (46.5s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451719 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 11:33:20.728035  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-451719 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.499454604s)
functional_test.go:761: restart took 46.499560266s for "functional-451719" cluster.
I0127 11:33:27.768726  893715 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (46.50s)
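
The --extra-config value uses the component.flag=value form, here injecting an admission plugin into kube-apiserver. One way to confirm the flag actually landed on the static pod (the label selector and grep are illustrative assumptions, not part of the test):

    out/minikube-linux-arm64 start -p functional-451719 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # the kube-apiserver static pod should now list the plugin among its command-line flags
    kubectl --context functional-451719 -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins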

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-451719 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 logs: (1.705072302s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 logs --file /tmp/TestFunctionalserialLogsFileCmd3843374405/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 logs --file /tmp/TestFunctionalserialLogsFileCmd3843374405/001/logs.txt: (1.75021535s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.75s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.98s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-451719 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-451719
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-451719: exit status 115 (634.991764ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30669 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-451719 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-451719 delete -f testdata/invalidsvc.yaml: (1.077496558s)
--- PASS: TestFunctional/serial/InvalidService (4.98s)
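
The failure mode being checked: `minikube service` refuses to print a usable URL for a Service whose backing pods never run, exiting 115 with SVC_UNREACHABLE even though a NodePort (30669 here) was allocated. A way to see the same condition by hand; the endpoints check is an illustration, not something the test runs:

    kubectl --context functional-451719 apply -f testdata/invalidsvc.yaml
    # with no running backends the Endpoints object stays empty
    kubectl --context functional-451719 get endpoints invalid-svc
    out/minikube-linux-arm64 service invalid-svc -p functional-451719   # exit status 115, SVC_UNREACHABLE
    kubectl --context functional-451719 delete -f testdata/invalidsvc.yaml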

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 config get cpus: exit status 14 (73.708475ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 config get cpus: exit status 14 (91.239014ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
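
The exit codes above are the interesting part: `config get` on a key that is not set returns exit status 14 with "specified key could not be found in config", while set/unset succeed. A quick sketch of the same round trip:

    out/minikube-linux-arm64 -p functional-451719 config unset cpus
    out/minikube-linux-arm64 -p functional-451719 config get cpus      # exit 14: key not set
    out/minikube-linux-arm64 -p functional-451719 config set cpus 2
    out/minikube-linux-arm64 -p functional-451719 config get cpus      # prints 2
    out/minikube-linux-arm64 -p functional-451719 config unset cpus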

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-451719 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-451719 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 929534: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.40s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451719 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-451719 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (188.993219ms)

                                                
                                                
-- stdout --
	* [functional-451719] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:34:08.437984  929224 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:34:08.438364  929224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:34:08.438377  929224 out.go:358] Setting ErrFile to fd 2...
	I0127 11:34:08.438383  929224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:34:08.438819  929224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:34:08.439329  929224 out.go:352] Setting JSON to false
	I0127 11:34:08.440349  929224 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15394,"bootTime":1737962255,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 11:34:08.440542  929224 start.go:139] virtualization:  
	I0127 11:34:08.443736  929224 out.go:177] * [functional-451719] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 11:34:08.447447  929224 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:34:08.447514  929224 notify.go:220] Checking for updates...
	I0127 11:34:08.452897  929224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:34:08.455483  929224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 11:34:08.458126  929224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 11:34:08.461038  929224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:34:08.464367  929224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:34:08.467730  929224 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:34:08.468285  929224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:34:08.494695  929224 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:34:08.494819  929224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:34:08.558523  929224 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:34:08.547418214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:34:08.558638  929224 docker.go:318] overlay module found
	I0127 11:34:08.561631  929224 out.go:177] * Using the docker driver based on existing profile
	I0127 11:34:08.564170  929224 start.go:297] selected driver: docker
	I0127 11:34:08.564187  929224 start.go:901] validating driver "docker" against &{Name:functional-451719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-451719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:34:08.564307  929224 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:34:08.567527  929224 out.go:201] 
	W0127 11:34:08.570121  929224 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 11:34:08.572618  929224 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451719 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-451719 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-451719 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (190.65314ms)

                                                
                                                
-- stdout --
	* [functional-451719] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:34:08.255491  929178 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:34:08.255686  929178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:34:08.255699  929178 out.go:358] Setting ErrFile to fd 2...
	I0127 11:34:08.255705  929178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:34:08.256540  929178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:34:08.256923  929178 out.go:352] Setting JSON to false
	I0127 11:34:08.257991  929178 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15394,"bootTime":1737962255,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 11:34:08.258065  929178 start.go:139] virtualization:  
	I0127 11:34:08.261471  929178 out.go:177] * [functional-451719] minikube v1.35.0 sur Ubuntu 20.04 (arm64)
	I0127 11:34:08.264986  929178 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:34:08.265139  929178 notify.go:220] Checking for updates...
	I0127 11:34:08.270774  929178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:34:08.273441  929178 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 11:34:08.275934  929178 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 11:34:08.278434  929178 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 11:34:08.281083  929178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:34:08.284243  929178 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:34:08.284770  929178 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:34:08.311430  929178 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 11:34:08.311590  929178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:34:08.367923  929178 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:54 SystemTime:2025-01-27 11:34:08.358467988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:34:08.368042  929178 docker.go:318] overlay module found
	I0127 11:34:08.371773  929178 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0127 11:34:08.374432  929178 start.go:297] selected driver: docker
	I0127 11:34:08.374451  929178 start.go:901] validating driver "docker" against &{Name:functional-451719 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-451719 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:34:08.374560  929178 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:34:08.377931  929178 out.go:201] 
	W0127 11:34:08.380643  929178 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 11:34:08.383131  929178 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.22s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

TestFunctional/parallel/ServiceCmdConnect (9.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-451719 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-451719 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-8449669db6-pkdmf" [fe68c0ef-6c79-40e9-b7d4-0000a725d380] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-8449669db6-pkdmf" [fe68c0ef-6c79-40e9-b7d4-0000a725d380] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005503203s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30994
functional_test.go:1675: http://192.168.49.2:30994: success! body:

Hostname: hello-node-connect-8449669db6-pkdmf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30994
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.66s)
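For reference, the sequence above creates a deployment, exposes it as a NodePort service, waits for the pod, and then fetches the URL that `minikube service --url` printed. A minimal Go sketch of that final verification step follows; it is not part of the minikube test suite, and the hard-coded URL simply reuses the endpoint reported in this log.

// fetch_endpoint.go - illustrative sketch only; assumes the NodePort URL
// printed by `minikube service hello-node-connect --url` is reachable.
package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
    "time"
)

func main() {
    url := "http://192.168.49.2:30994" // endpoint taken from the log above
    client := &http.Client{Timeout: 10 * time.Second}
    resp, err := client.Get(url)
    if err != nil {
        fmt.Fprintln(os.Stderr, "request failed:", err)
        os.Exit(1)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil || resp.StatusCode != http.StatusOK || len(body) == 0 {
        fmt.Fprintf(os.Stderr, "unexpected response: status=%d err=%v\n", resp.StatusCode, err)
        os.Exit(1)
    }
    fmt.Printf("success! body:\n%s\n", body)
}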

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (26.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8eec3f8c-5606-4930-9526-eaa63db41a73] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004137733s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-451719 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-451719 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-451719 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-451719 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [41b5c9d4-67c7-496a-80b2-ee9f607143d0] Pending
helpers_test.go:344: "sp-pod" [41b5c9d4-67c7-496a-80b2-ee9f607143d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [41b5c9d4-67c7-496a-80b2-ee9f607143d0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004215797s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-451719 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-451719 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-451719 delete -f testdata/storage-provisioner/pod.yaml: (1.620226533s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-451719 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0154f67f-5197-4ef0-94e2-6dab42fa85d0] Pending
helpers_test.go:344: "sp-pod" [0154f67f-5197-4ef0-94e2-6dab42fa85d0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005108138s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-451719 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.65s)

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh -n functional-451719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cp functional-451719:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd581701182/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh -n functional-451719 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh -n functional-451719 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/893715/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo cat /etc/test/nested/copy/893715/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/893715.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo cat /etc/ssl/certs/893715.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/893715.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo cat /usr/share/ca-certificates/893715.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/8937152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo cat /etc/ssl/certs/8937152.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/8937152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo cat /usr/share/ca-certificates/8937152.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.08s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-451719 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
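The go-template in the command above ranges over the first node's .metadata.labels map and prints each key. A self-contained Go illustration of the same template form follows, evaluated against a plain in-memory map rather than a live node object; the sample labels are assumptions for the sketch, not values taken from this run.

// labels_template.go - illustrative only; shows the {{range $k, $v := ...}}
// template form used by the test, applied to a local map instead of kubectl output.
package main

import (
    "os"
    "text/template"
)

func main() {
    labels := map[string]string{
        "kubernetes.io/arch": "arm64",
        "kubernetes.io/os":   "linux",
    }
    tmpl := template.Must(template.New("labels").Parse(
        "{{range $k, $v := .}}{{$k}} {{end}}"))
    if err := tmpl.Execute(os.Stdout, labels); err != nil {
        panic(err)
    }
}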

TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 ssh "sudo systemctl is-active docker": exit status 1 (327.461411ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 ssh "sudo systemctl is-active crio": exit status 1 (320.473159ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
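The non-zero exits above are the expected outcome: with containerd selected as the runtime, `systemctl is-active docker` and `systemctl is-active crio` print "inactive" and exit with status 3 inside the node. A standalone Go sketch of the same assertion follows; it is illustrative only, shelling out to the minikube binary and profile named in this log rather than reusing the suite's helpers.

// runtime_disabled.go - illustrative sketch; expects `systemctl is-active`
// to fail (exit 3) and report "inactive" for docker and crio.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    for _, svc := range []string{"docker", "crio"} {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-451719",
            "ssh", "sudo systemctl is-active "+svc)
        out, _ := cmd.CombinedOutput() // a non-zero exit is expected here
        inactive := strings.Contains(string(out), "inactive")
        if cmd.ProcessState != nil && cmd.ProcessState.ExitCode() != 0 && inactive {
            fmt.Printf("%s is disabled, as expected\n", svc)
            continue
        }
        fmt.Fprintf(os.Stderr, "%s appears active: %q\n", svc, out)
        os.Exit(1)
    }
}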

TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-451719 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-451719 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-451719 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-451719 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 926678: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-451719 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-451719 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ac3c5225-d26e-4474-92f5-1729724e90de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ac3c5225-d26e-4474-92f5-1729724e90de] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004181967s
I0127 11:33:45.798788  893715 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-451719 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.148.53 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-451719 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-451719 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-451719 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64fc58db8c-4w5hc" [ae33a946-9e2e-4bef-ac64-ee81cea750de] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64fc58db8c-4w5hc" [ae33a946-9e2e-4bef-ac64-ee81cea750de] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003949738s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ServiceCmd/List (0.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.70s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "485.066129ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "75.194835ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 service list -o json
functional_test.go:1494: Took "605.232204ms" to run "out/minikube-linux-arm64 -p functional-451719 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "424.805591ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "58.230207ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/MountCmd/any-port (7.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdany-port1407283779/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737977645328810154" to /tmp/TestFunctionalparallelMountCmdany-port1407283779/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737977645328810154" to /tmp/TestFunctionalparallelMountCmdany-port1407283779/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737977645328810154" to /tmp/TestFunctionalparallelMountCmdany-port1407283779/001/test-1737977645328810154
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (461.829554ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0127 11:34:05.791711  893715 retry.go:31] will retry after 354.487775ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 11:34 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 11:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 11:34 test-1737977645328810154
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh cat /mount-9p/test-1737977645328810154
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-451719 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [492dc316-16ac-4535-a187-3f7bfc6066a6] Pending
helpers_test.go:344: "busybox-mount" [492dc316-16ac-4535-a187-3f7bfc6066a6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [492dc316-16ac-4535-a187-3f7bfc6066a6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [492dc316-16ac-4535-a187-3f7bfc6066a6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003873841s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-451719 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdany-port1407283779/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.35s)
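The first findmnt probe above failed and was retried roughly 350 ms later because the 9p mount can take a moment to appear after `minikube mount` starts. A small Go sketch of that probe-with-retry step follows; it is illustrative only and assumes a mount daemon like the one in this log is already running in another process.

// wait_for_mount.go - illustrative sketch; polls until /mount-9p shows up
// as a 9p filesystem inside the node, mirroring the retry seen in the log.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    deadline := time.Now().Add(10 * time.Second)
    for {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-451719",
            "ssh", "findmnt -T /mount-9p | grep 9p")
        out, err := cmd.CombinedOutput()
        if err == nil {
            fmt.Printf("9p mount is up:\n%s", out)
            return
        }
        if time.Now().After(deadline) {
            fmt.Fprintf(os.Stderr, "mount never appeared: %v\n%s", err, out)
            os.Exit(1)
        }
        time.Sleep(350 * time.Millisecond) // roughly the retry interval seen above
    }
}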

TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31655
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31655
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/parallel/MountCmd/specific-port (1.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdspecific-port2888948161/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdspecific-port2888948161/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 ssh "sudo umount -f /mount-9p": exit status 1 (314.44801ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-451719 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdspecific-port2888948161/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3129978902/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3129978902/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3129978902/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T" /mount1: exit status 1 (798.073006ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0127 11:34:14.785322  893715 retry.go:31] will retry after 543.402468ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-451719 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3129978902/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3129978902/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-451719 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3129978902/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 version -o=json --components: (1.348754908s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451719 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-451719
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-451719
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451719 image ls --format short --alsologtostderr:
I0127 11:34:26.112774  932265 out.go:345] Setting OutFile to fd 1 ...
I0127 11:34:26.113249  932265 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.113286  932265 out.go:358] Setting ErrFile to fd 2...
I0127 11:34:26.113912  932265 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.114238  932265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
I0127 11:34:26.114962  932265 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.115150  932265 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.115711  932265 cli_runner.go:164] Run: docker container inspect functional-451719 --format={{.State.Status}}
I0127 11:34:26.159021  932265 ssh_runner.go:195] Run: systemctl --version
I0127 11:34:26.159073  932265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451719
I0127 11:34:26.185975  932265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33577 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/functional-451719/id_rsa Username:docker}
I0127 11:34:26.282056  932265 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451719 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:781d90 | 68.5MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:293376 | 24MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-451719  | sha256:071652 | 989B   |
| docker.io/library/nginx                     | alpine             | sha256:f9d642 | 21.6MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:265c2d | 26.2MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:7fc9d4 | 67.9MB |
| docker.io/kicbase/echo-server               | functional-451719  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:2be0bc | 35.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e124fb | 27.4MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:ddb38c | 18.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451719 image ls --format table --alsologtostderr:
I0127 11:34:27.197974  932568 out.go:345] Setting OutFile to fd 1 ...
I0127 11:34:27.198180  932568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:27.198201  932568 out.go:358] Setting ErrFile to fd 2...
I0127 11:34:27.198228  932568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:27.198493  932568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
I0127 11:34:27.199234  932568 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:27.199393  932568 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:27.199941  932568 cli_runner.go:164] Run: docker container inspect functional-451719 --format={{.State.Status}}
I0127 11:34:27.222228  932568 ssh_runner.go:195] Run: systemctl --version
I0127 11:34:27.222287  932568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451719
I0127 11:34:27.257068  932568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33577 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/functional-451719/id_rsa Username:docker}
I0127 11:34:27.357586  932568 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451719 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-451719"],"size":"2173567"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8
fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"18922457"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d","repoDigests":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21565101"},{"id":"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"27363416"},{"id":"sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindn
etd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"35310383"},{"id":"sha256:071652135235ade0a8cb6ec6032534dd56ce2ee3d2327ff7c41131e5cee4adb8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-451719"],"size":"989"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"67941650"},{"id":"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b2
74365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"23968433"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670","rep
oDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"68507108"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"26217748"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451719 image ls --format json --alsologtostderr:
I0127 11:34:26.919939  932499 out.go:345] Setting OutFile to fd 1 ...
I0127 11:34:26.920060  932499 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.920071  932499 out.go:358] Setting ErrFile to fd 2...
I0127 11:34:26.920076  932499 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.920374  932499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
I0127 11:34:26.921134  932499 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.921296  932499 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.921776  932499 cli_runner.go:164] Run: docker container inspect functional-451719 --format={{.State.Status}}
I0127 11:34:26.947288  932499 ssh_runner.go:195] Run: systemctl --version
I0127 11:34:26.947346  932499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451719
I0127 11:34:26.976573  932499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33577 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/functional-451719/id_rsa Username:docker}
I0127 11:34:27.071247  932499 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
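
The JSON emitted by image ls --format json above is a single array of image records with id, repoDigests, repoTags and size fields (size is a byte count encoded as a string). Below is a minimal Go sketch for decoding that output; it assumes a minikube binary on PATH instead of the out/minikube-linux-arm64 build used in the run, and a running functional-451719 profile, and the struct name is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageEntry mirrors one element of the image ls --format json array shown above.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	// Assumes a running profile named functional-451719, as in the run above.
	out, err := exec.Command("minikube", "-p", "functional-451719",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}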

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-451719 image ls --format yaml --alsologtostderr:
- id: sha256:2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "35310383"
- id: sha256:f9d642c42f7bc79efd0a3aa2b7fe913e0324a23c2f27c6b7f3f112473d47131d
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "21565101"
- id: sha256:781d902f1e046dcb5aba879a2371b2b6494f97bad89f65a2c7308e78f8087670
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "68507108"
- id: sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "27363416"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-451719
size: "2173567"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "23968433"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:071652135235ade0a8cb6ec6032534dd56ce2ee3d2327ff7c41131e5cee4adb8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-451719
size: "989"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "67941650"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "26217748"
- id: sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "18922457"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451719 image ls --format yaml --alsologtostderr:
I0127 11:34:26.637567  932426 out.go:345] Setting OutFile to fd 1 ...
I0127 11:34:26.637690  932426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.637695  932426 out.go:358] Setting ErrFile to fd 2...
I0127 11:34:26.637700  932426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.638052  932426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
I0127 11:34:26.638895  932426 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.639021  932426 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.639499  932426 cli_runner.go:164] Run: docker container inspect functional-451719 --format={{.State.Status}}
I0127 11:34:26.665156  932426 ssh_runner.go:195] Run: systemctl --version
I0127 11:34:26.665213  932426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451719
I0127 11:34:26.691375  932426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33577 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/functional-451719/id_rsa Username:docker}
I0127 11:34:26.778420  932426 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
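
The YAML listing above carries the same four fields per image as the JSON format. A small Go sketch that decodes it with gopkg.in/yaml.v3 and totals the reported sizes; same assumptions as the JSON example (minikube on PATH, the functional-451719 profile running), and the struct name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strconv"

	"gopkg.in/yaml.v3"
)

// yamlImage matches the per-image entries in the listing above: id, repoDigests, repoTags, size.
type yamlImage struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-451719",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []yamlImage
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	var total int64
	for _, img := range images {
		n, _ := strconv.ParseInt(img.Size, 10, 64)
		total += n
	}
	fmt.Printf("%d images, %d bytes total\n", len(images), total)
}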

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-451719 ssh pgrep buildkitd: exit status 1 (334.844727ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image build -t localhost/my-image:functional-451719 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 image build -t localhost/my-image:functional-451719 testdata/build --alsologtostderr: (3.210612853s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-451719 image build -t localhost/my-image:functional-451719 testdata/build --alsologtostderr:
I0127 11:34:26.849198  932487 out.go:345] Setting OutFile to fd 1 ...
I0127 11:34:26.849851  932487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.849904  932487 out.go:358] Setting ErrFile to fd 2...
I0127 11:34:26.849926  932487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:34:26.850367  932487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
I0127 11:34:26.851598  932487 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.853868  932487 config.go:182] Loaded profile config "functional-451719": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 11:34:26.854474  932487 cli_runner.go:164] Run: docker container inspect functional-451719 --format={{.State.Status}}
I0127 11:34:26.899626  932487 ssh_runner.go:195] Run: systemctl --version
I0127 11:34:26.899678  932487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-451719
I0127 11:34:26.938365  932487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33577 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/functional-451719/id_rsa Username:docker}
I0127 11:34:27.041957  932487 build_images.go:161] Building image from path: /tmp/build.3666709231.tar
I0127 11:34:27.042037  932487 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 11:34:27.052191  932487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3666709231.tar
I0127 11:34:27.056256  932487 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3666709231.tar: stat -c "%s %y" /var/lib/minikube/build/build.3666709231.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3666709231.tar': No such file or directory
I0127 11:34:27.056290  932487 ssh_runner.go:362] scp /tmp/build.3666709231.tar --> /var/lib/minikube/build/build.3666709231.tar (3072 bytes)
I0127 11:34:27.085250  932487 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3666709231
I0127 11:34:27.104664  932487 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3666709231 -xf /var/lib/minikube/build/build.3666709231.tar
I0127 11:34:27.116987  932487 containerd.go:394] Building image: /var/lib/minikube/build/build.3666709231
I0127 11:34:27.117120  932487 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3666709231 --local dockerfile=/var/lib/minikube/build/build.3666709231 --output type=image,name=localhost/my-image:functional-451719
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:0a766550c8f1a2fdced0bbaeac282507c62953f982fa7c8a33351aa8ffbf7830 0.0s done
#8 exporting config sha256:2b7bf5deeba343e045fe6f863b887bcd3a5b37e589c3edc9bf6f08fa427afd49 0.0s done
#8 naming to localhost/my-image:functional-451719 done
#8 DONE 0.2s
I0127 11:34:29.940192  932487 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3666709231 --local dockerfile=/var/lib/minikube/build/build.3666709231 --output type=image,name=localhost/my-image:functional-451719: (2.823036496s)
I0127 11:34:29.940263  932487 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3666709231
I0127 11:34:29.949981  932487 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3666709231.tar
I0127 11:34:29.959109  932487 build_images.go:217] Built localhost/my-image:functional-451719 from /tmp/build.3666709231.tar
I0127 11:34:29.959144  932487 build_images.go:133] succeeded building to: functional-451719
I0127 11:34:29.959150  932487 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)
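
The buildctl trace above captures the whole build: a three-instruction Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), a 62-byte build context, and an image exported as localhost/my-image:functional-451719. Below is a Go sketch that reproduces the same flow by writing an equivalent context to a temporary directory and handing it to minikube image build; the Dockerfile text is reconstructed from build steps #5-#7 above rather than copied from testdata/build, and a minikube binary on PATH plus a running functional-451719 profile are assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed from build steps #5-#7 above; the real testdata/build may differ.
	dockerfile := `FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
`
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// Assumes a running profile named functional-451719, as in the run above.
	cmd := exec.Command("minikube", "-p", "functional-451719",
		"image", "build", "-t", "localhost/my-image:functional-451719", dir)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}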

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-451719
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image load --daemon kicbase/echo-server:functional-451719 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-451719 image load --daemon kicbase/echo-server:functional-451719 --alsologtostderr: (1.087746056s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image load --daemon kicbase/echo-server:functional-451719 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-451719
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image load --daemon kicbase/echo-server:functional-451719 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image save kicbase/echo-server:functional-451719 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
2025/01/27 11:34:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image rm kicbase/echo-server:functional-451719 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-451719
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 image save --daemon kicbase/echo-server:functional-451719 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-451719
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)
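
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above exercise a full round trip: save a tagged image to a tarball, remove it from the cluster runtime, and load it back. A Go sketch of the file-based round trip using the same subcommands; the profile name matches the run above, while the tarball path and the minikube binary name are placeholders.

package main

import (
	"os"
	"os/exec"
)

// run executes a command, streams its output, and fails loudly on error.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const profile = "functional-451719" // assumed running, as in the run above
	tar := "/tmp/echo-server-save.tar"

	// Save the tagged image out of the cluster runtime into a tarball.
	run("minikube", "-p", profile, "image", "save", "kicbase/echo-server:"+profile, tar)
	// Remove it from the cluster runtime ...
	run("minikube", "-p", profile, "image", "rm", "kicbase/echo-server:"+profile)
	// ... then load it back from the tarball and list images to confirm.
	run("minikube", "-p", profile, "image", "load", tar)
	run("minikube", "-p", profile, "image", "ls")
}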

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-451719 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-451719
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-451719
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-451719
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (137.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-737510 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 11:34:42.649702  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-737510 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m16.476129166s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (137.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (30.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- rollout status deployment/busybox
E0127 11:36:58.788380  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-737510 -- rollout status deployment/busybox: (27.174825353s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-hnbml -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-q6xhv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-rjlnc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-hnbml -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-q6xhv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-rjlnc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-hnbml -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-q6xhv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-rjlnc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.23s)
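
The DeployApp steps above apply the busybox deployment, wait for the rollout, and then verify in-cluster DNS by running nslookup for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local inside every pod. A Go sketch of that verification loop; it assumes kubectl on PATH and that the kubeconfig context created by minikube is named after the profile (ha-737510), and it collects pod names with the same jsonpath expression the test uses.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const context = "ha-737510" // kubeconfig context created by the run above (assumed)

	// Same jsonpath query the test uses to collect the busybox pod names.
	out, err := exec.Command("kubectl", "--context", context, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	pods := strings.Fields(string(out))

	// The three lookups the test performs in every pod.
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			if err := exec.Command("kubectl", "--context", context,
				"exec", pod, "--", "nslookup", name).Run(); err != nil {
				panic(fmt.Errorf("nslookup %s failed in %s: %w", name, pod, err))
			}
		}
	}
	fmt.Println("all lookups succeeded in", len(pods), "pods")
}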

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-hnbml -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-hnbml -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-q6xhv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-q6xhv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-rjlnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-737510 -- exec busybox-58667487b6-rjlnc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)
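
The shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, pulls the resolved address out of the fifth line of busybox's nslookup output before pinging it to prove the pods can reach the host. The Go sketch below performs roughly the same extraction without awk or cut inside the pod; the context name and pod name are taken from this particular run and would differ elsewhere.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIPFromPod mirrors the awk 'NR==5' / cut -f3 step: it runs nslookup inside the pod
// and returns the third whitespace-separated field of the fifth output line, which is
// where busybox's nslookup prints the resolved address.
func hostIPFromPod(context, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
		"nslookup", "host.minikube.internal").Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output: %q", out)
	}
	fields := strings.Fields(lines[4]) // NR==5 is the fifth line (index 4)
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected nslookup line: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	ip, err := hostIPFromPod("ha-737510", "busybox-58667487b6-hnbml")
	if err != nil {
		panic(err)
	}
	// The test then runs ping -c 1 <ip> from the same pod; printing is enough here.
	fmt.Println("host.minikube.internal resolves to", ip)
}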

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (20.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-737510 -v=7 --alsologtostderr
E0127 11:37:26.493183  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-737510 -v=7 --alsologtostderr: (19.874220961s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr: (1.013691695s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-737510 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 status --output json -v=7 --alsologtostderr: (1.125025696s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp testdata/cp-test.txt ha-737510:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536316066/001/cp-test_ha-737510.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510:/home/docker/cp-test.txt ha-737510-m02:/home/docker/cp-test_ha-737510_ha-737510-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test_ha-737510_ha-737510-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510:/home/docker/cp-test.txt ha-737510-m03:/home/docker/cp-test_ha-737510_ha-737510-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test_ha-737510_ha-737510-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510:/home/docker/cp-test.txt ha-737510-m04:/home/docker/cp-test_ha-737510_ha-737510-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test_ha-737510_ha-737510-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp testdata/cp-test.txt ha-737510-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536316066/001/cp-test_ha-737510-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m02:/home/docker/cp-test.txt ha-737510:/home/docker/cp-test_ha-737510-m02_ha-737510.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test_ha-737510-m02_ha-737510.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m02:/home/docker/cp-test.txt ha-737510-m03:/home/docker/cp-test_ha-737510-m02_ha-737510-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test_ha-737510-m02_ha-737510-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m02:/home/docker/cp-test.txt ha-737510-m04:/home/docker/cp-test_ha-737510-m02_ha-737510-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test_ha-737510-m02_ha-737510-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp testdata/cp-test.txt ha-737510-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536316066/001/cp-test_ha-737510-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m03:/home/docker/cp-test.txt ha-737510:/home/docker/cp-test_ha-737510-m03_ha-737510.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test_ha-737510-m03_ha-737510.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m03:/home/docker/cp-test.txt ha-737510-m02:/home/docker/cp-test_ha-737510-m03_ha-737510-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test_ha-737510-m03_ha-737510-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m03:/home/docker/cp-test.txt ha-737510-m04:/home/docker/cp-test_ha-737510-m03_ha-737510-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test_ha-737510-m03_ha-737510-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp testdata/cp-test.txt ha-737510-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2536316066/001/cp-test_ha-737510-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m04:/home/docker/cp-test.txt ha-737510:/home/docker/cp-test_ha-737510-m04_ha-737510.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510 "sudo cat /home/docker/cp-test_ha-737510-m04_ha-737510.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m04:/home/docker/cp-test.txt ha-737510-m02:/home/docker/cp-test_ha-737510-m04_ha-737510-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m02 "sudo cat /home/docker/cp-test_ha-737510-m04_ha-737510-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 cp ha-737510-m04:/home/docker/cp-test.txt ha-737510-m03:/home/docker/cp-test_ha-737510-m04_ha-737510-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 ssh -n ha-737510-m03 "sudo cat /home/docker/cp-test_ha-737510-m04_ha-737510-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.41s)
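
Every step of the CopyFile block above repeats one pattern: minikube cp a file onto a node, then minikube ssh -n <node> "sudo cat ..." to read it back and compare. A Go sketch of a single copy-and-verify round trip; the profile and node names follow the cluster started above, and a minikube binary on PATH is assumed.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "ha-737510"  // profile from the run above
	const node = "ha-737510-m02" // any node reported by minikube node list
	const remote = "/home/docker/cp-test.txt"

	src, err := os.CreateTemp("", "cp-test")
	if err != nil {
		panic(err)
	}
	defer os.Remove(src.Name())
	content := []byte("copy round trip\n")
	if _, err := src.Write(content); err != nil {
		panic(err)
	}
	src.Close()

	// Push the file onto the node, as the test does with testdata/cp-test.txt.
	if err := exec.Command("minikube", "-p", profile, "cp", src.Name(), node+":"+remote).Run(); err != nil {
		panic(err)
	}

	// Read it back over ssh and compare with what was written.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if string(out) != string(content) {
		panic(fmt.Sprintf("round trip mismatch: %q vs %q", out, content))
	}
	fmt.Println("cp round trip verified on", node)
}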

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 node stop m02 -v=7 --alsologtostderr: (12.107826564s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr: exit status 7 (750.893222ms)

                                                
                                                
-- stdout --
	ha-737510
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-737510-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-737510-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-737510-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:38:15.864831  949204 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:38:15.865011  949204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:38:15.865022  949204 out.go:358] Setting ErrFile to fd 2...
	I0127 11:38:15.865064  949204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:38:15.865326  949204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:38:15.865510  949204 out.go:352] Setting JSON to false
	I0127 11:38:15.865546  949204 mustload.go:65] Loading cluster: ha-737510
	I0127 11:38:15.865636  949204 notify.go:220] Checking for updates...
	I0127 11:38:15.865970  949204 config.go:182] Loaded profile config "ha-737510": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:38:15.865988  949204 status.go:174] checking status of ha-737510 ...
	I0127 11:38:15.867560  949204 cli_runner.go:164] Run: docker container inspect ha-737510 --format={{.State.Status}}
	I0127 11:38:15.887346  949204 status.go:371] ha-737510 host status = "Running" (err=<nil>)
	I0127 11:38:15.887378  949204 host.go:66] Checking if "ha-737510" exists ...
	I0127 11:38:15.887686  949204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-737510
	I0127 11:38:15.910030  949204 host.go:66] Checking if "ha-737510" exists ...
	I0127 11:38:15.910380  949204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:15.910432  949204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-737510
	I0127 11:38:15.932708  949204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33582 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/ha-737510/id_rsa Username:docker}
	I0127 11:38:16.022931  949204 ssh_runner.go:195] Run: systemctl --version
	I0127 11:38:16.028450  949204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:16.040820  949204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:38:16.100932  949204 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:73 SystemTime:2025-01-27 11:38:16.091404469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:38:16.101594  949204 kubeconfig.go:125] found "ha-737510" server: "https://192.168.49.254:8443"
	I0127 11:38:16.101629  949204 api_server.go:166] Checking apiserver status ...
	I0127 11:38:16.101681  949204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:38:16.118570  949204 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1524/cgroup
	I0127 11:38:16.130053  949204 api_server.go:182] apiserver freezer: "3:freezer:/docker/867301e3d4e9df437a8681505303680334cbc2fd81f48bb257f03f4b529368a0/kubepods/burstable/podcae1a606619a8b0d8aaff2358de43c6e/2242ab18ba5510bb8932e4e0a2d6d86508a0ce97b30809ccd1e30d196cfdc981"
	I0127 11:38:16.130128  949204 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/867301e3d4e9df437a8681505303680334cbc2fd81f48bb257f03f4b529368a0/kubepods/burstable/podcae1a606619a8b0d8aaff2358de43c6e/2242ab18ba5510bb8932e4e0a2d6d86508a0ce97b30809ccd1e30d196cfdc981/freezer.state
	I0127 11:38:16.138738  949204 api_server.go:204] freezer state: "THAWED"
	I0127 11:38:16.138767  949204 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 11:38:16.148145  949204 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 11:38:16.148172  949204 status.go:463] ha-737510 apiserver status = Running (err=<nil>)
	I0127 11:38:16.148228  949204 status.go:176] ha-737510 status: &{Name:ha-737510 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:16.148257  949204 status.go:174] checking status of ha-737510-m02 ...
	I0127 11:38:16.148581  949204 cli_runner.go:164] Run: docker container inspect ha-737510-m02 --format={{.State.Status}}
	I0127 11:38:16.166437  949204 status.go:371] ha-737510-m02 host status = "Stopped" (err=<nil>)
	I0127 11:38:16.166461  949204 status.go:384] host is not running, skipping remaining checks
	I0127 11:38:16.166469  949204 status.go:176] ha-737510-m02 status: &{Name:ha-737510-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:16.166490  949204 status.go:174] checking status of ha-737510-m03 ...
	I0127 11:38:16.166810  949204 cli_runner.go:164] Run: docker container inspect ha-737510-m03 --format={{.State.Status}}
	I0127 11:38:16.184854  949204 status.go:371] ha-737510-m03 host status = "Running" (err=<nil>)
	I0127 11:38:16.184880  949204 host.go:66] Checking if "ha-737510-m03" exists ...
	I0127 11:38:16.185307  949204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-737510-m03
	I0127 11:38:16.202761  949204 host.go:66] Checking if "ha-737510-m03" exists ...
	I0127 11:38:16.203076  949204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:16.203122  949204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-737510-m03
	I0127 11:38:16.229962  949204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33592 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/ha-737510-m03/id_rsa Username:docker}
	I0127 11:38:16.322479  949204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:16.334237  949204 kubeconfig.go:125] found "ha-737510" server: "https://192.168.49.254:8443"
	I0127 11:38:16.334266  949204 api_server.go:166] Checking apiserver status ...
	I0127 11:38:16.334313  949204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:38:16.344857  949204 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	I0127 11:38:16.356880  949204 api_server.go:182] apiserver freezer: "3:freezer:/docker/f4e7929cec1ee9448fd63874b655690f4793117d8348bc8b1bac834a8c374261/kubepods/burstable/pod24ac97d611488de9858a7ab613183594/3c9394fc935ea6b547fb807fa53500b795431a31e9a6928a10bdaa777201cd1a"
	I0127 11:38:16.356948  949204 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f4e7929cec1ee9448fd63874b655690f4793117d8348bc8b1bac834a8c374261/kubepods/burstable/pod24ac97d611488de9858a7ab613183594/3c9394fc935ea6b547fb807fa53500b795431a31e9a6928a10bdaa777201cd1a/freezer.state
	I0127 11:38:16.365721  949204 api_server.go:204] freezer state: "THAWED"
	I0127 11:38:16.365753  949204 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0127 11:38:16.373979  949204 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0127 11:38:16.374007  949204 status.go:463] ha-737510-m03 apiserver status = Running (err=<nil>)
	I0127 11:38:16.374017  949204 status.go:176] ha-737510-m03 status: &{Name:ha-737510-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:16.374057  949204 status.go:174] checking status of ha-737510-m04 ...
	I0127 11:38:16.374382  949204 cli_runner.go:164] Run: docker container inspect ha-737510-m04 --format={{.State.Status}}
	I0127 11:38:16.391216  949204 status.go:371] ha-737510-m04 host status = "Running" (err=<nil>)
	I0127 11:38:16.391242  949204 host.go:66] Checking if "ha-737510-m04" exists ...
	I0127 11:38:16.391526  949204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-737510-m04
	I0127 11:38:16.418449  949204 host.go:66] Checking if "ha-737510-m04" exists ...
	I0127 11:38:16.418767  949204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:16.418820  949204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-737510-m04
	I0127 11:38:16.435671  949204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33597 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/ha-737510-m04/id_rsa Username:docker}
	I0127 11:38:16.521972  949204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:16.534739  949204 status.go:176] ha-737510-m04 status: &{Name:ha-737510-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.86s)
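
Two details in the stderr above are worth calling out: minikube status exits with code 7 rather than 0 once any node is stopped, and for each running control plane it locates the apiserver through its cgroup freezer path and then probes https://192.168.49.254:8443/healthz via the load-balancer address. A Go sketch of that healthz probe; the VIP and port are the ones reported above, the endpoint is only reachable from the machine hosting the cluster, and certificate verification is skipped here because the apiserver certificate is signed by minikube's own CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Load-balanced apiserver endpoint reported by the status check above.
	const url = "https://192.168.49.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification for this probe; the cert is signed by minikube's CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("%s -> %d %s\n", url, resp.StatusCode, body)
}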

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (19.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 node start m02 -v=7 --alsologtostderr: (18.289348577s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr: (1.020267617s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0127 11:38:37.084598  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:37.091422  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:37.102793  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:37.124068  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:37.165402  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:37.246797  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:37.408426  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:37.730363  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019702798s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (146.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-737510 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-737510 -v=7 --alsologtostderr
E0127 11:38:38.372352  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:39.653654  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:42.215984  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:47.338006  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:38:57.579574  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-737510 -v=7 --alsologtostderr: (37.103959603s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-737510 --wait=true -v=7 --alsologtostderr
E0127 11:39:18.060935  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 11:39:59.022600  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-737510 --wait=true -v=7 --alsologtostderr: (1m48.867823249s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-737510
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (146.13s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 node delete m03 -v=7 --alsologtostderr: (9.790046569s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.75s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 stop -v=7 --alsologtostderr
E0127 11:41:20.944714  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 stop -v=7 --alsologtostderr: (35.736711313s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr: exit status 7 (116.807537ms)

                                                
                                                
-- stdout --
	ha-737510
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-737510-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-737510-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:41:51.195174  963773 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:41:51.195378  963773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:41:51.195936  963773 out.go:358] Setting ErrFile to fd 2...
	I0127 11:41:51.195957  963773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:41:51.196198  963773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:41:51.196397  963773 out.go:352] Setting JSON to false
	I0127 11:41:51.196434  963773 mustload.go:65] Loading cluster: ha-737510
	I0127 11:41:51.196874  963773 config.go:182] Loaded profile config "ha-737510": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:41:51.196895  963773 status.go:174] checking status of ha-737510 ...
	I0127 11:41:51.197143  963773 notify.go:220] Checking for updates...
	I0127 11:41:51.197884  963773 cli_runner.go:164] Run: docker container inspect ha-737510 --format={{.State.Status}}
	I0127 11:41:51.216892  963773 status.go:371] ha-737510 host status = "Stopped" (err=<nil>)
	I0127 11:41:51.216913  963773 status.go:384] host is not running, skipping remaining checks
	I0127 11:41:51.216920  963773 status.go:176] ha-737510 status: &{Name:ha-737510 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:41:51.216958  963773 status.go:174] checking status of ha-737510-m02 ...
	I0127 11:41:51.217295  963773 cli_runner.go:164] Run: docker container inspect ha-737510-m02 --format={{.State.Status}}
	I0127 11:41:51.241296  963773 status.go:371] ha-737510-m02 host status = "Stopped" (err=<nil>)
	I0127 11:41:51.241314  963773 status.go:384] host is not running, skipping remaining checks
	I0127 11:41:51.241321  963773 status.go:176] ha-737510-m02 status: &{Name:ha-737510-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:41:51.241339  963773 status.go:174] checking status of ha-737510-m04 ...
	I0127 11:41:51.241645  963773 cli_runner.go:164] Run: docker container inspect ha-737510-m04 --format={{.State.Status}}
	I0127 11:41:51.264309  963773 status.go:371] ha-737510-m04 host status = "Stopped" (err=<nil>)
	I0127 11:41:51.264328  963773 status.go:384] host is not running, skipping remaining checks
	I0127 11:41:51.264336  963773 status.go:176] ha-737510-m04 status: &{Name:ha-737510-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.85s)
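
The "status: &{Name:... Host:Stopped ...}" entries in the stderr block above are Go struct dumps of the per-node status gathered by the status command. As an illustration only, a minimal struct that mirrors those fields (names and types are inferred from this log, not taken from the minikube source):

	package main

	import "fmt"

	// NodeStatus mirrors the fields printed in the "status: &{...}" lines above.
	type NodeStatus struct {
		Name       string
		Host       string // "Running" or "Stopped" in this log
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := NodeStatus{Name: "ha-737510", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// %+v reproduces the shape of the dumps seen in the stderr above.
		fmt.Printf("status: &%+v\n", s)
	}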

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (73.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-737510 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 11:41:58.788582  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-737510 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m13.004508436s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (73.97s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (40.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-737510 --control-plane -v=7 --alsologtostderr
E0127 11:43:37.084437  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-737510 --control-plane -v=7 --alsologtostderr: (39.312087112s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-737510 status -v=7 --alsologtostderr: (1.012813039s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.33s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.083636278s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
TestJSONOutput/start/Command (86.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-294788 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0127 11:44:04.786016  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-294788 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m26.009620346s)
--- PASS: TestJSONOutput/start/Command (86.01s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-294788 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-294788 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.74s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-294788 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-294788 --output=json --user=testUser: (5.738633453s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-485025 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-485025 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.542257ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ac79b5c1-84df-47bc-ad55-351fe7828ad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-485025] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb5d426e-a06d-4e84-ae89-1f66faa70b38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20318"}}
	{"specversion":"1.0","id":"52a5f2b1-c717-45ff-95fb-7697e5cb63cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e16e1a9-ed16-4b69-ae8b-6a8ffe73e41a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig"}}
	{"specversion":"1.0","id":"8ae6bbf3-9acb-4d42-9286-1973f04d8eb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube"}}
	{"specversion":"1.0","id":"516dad82-3d44-4ee5-a0bb-20b9eedd2ea1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7c7af1cc-0a9d-4923-8dd1-0f45811bafe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5ab451a6-5c5c-484c-b898-aae7f1ed9e6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-485025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-485025
--- PASS: TestErrorJSONOutput (0.24s)
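
Each line in the -- stdout -- block above is a self-contained JSON event. As an illustration only, a minimal Go sketch that decodes one of those lines, modelling just the fields visible in this output (specversion, id, source, type, datacontenttype, data):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event covers only the fields visible in the --output=json lines above.
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// The error event from the output above, trimmed to its non-empty fields.
		line := `{"specversion":"1.0","id":"5ab451a6-5c5c-484c-b898-aae7f1ed9e6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"], e.Data["message"])
	}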

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.5s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-841540 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-841540 --network=: (39.369313228s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-841540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-841540
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-841540: (2.10764691s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.50s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.98s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-437815 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-437815 --network=bridge: (31.93794994s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-437815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-437815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-437815: (2.018163654s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.98s)

                                                
                                    
TestKicExistingNetwork (35.29s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0127 11:46:48.954264  893715 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0127 11:46:48.969670  893715 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0127 11:46:48.969761  893715 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0127 11:46:48.969783  893715 cli_runner.go:164] Run: docker network inspect existing-network
W0127 11:46:48.985171  893715 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0127 11:46:48.985201  893715 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0127 11:46:48.985219  893715 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0127 11:46:48.985414  893715 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0127 11:46:49.002190  893715 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2217238752e2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:92:9d:42:1b} reservation:<nil>}
I0127 11:46:49.002617  893715 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c8d130}
I0127 11:46:49.002647  893715 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0127 11:46:49.002707  893715 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0127 11:46:49.075877  893715 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-902275 --network=existing-network
E0127 11:46:58.788650  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-902275 --network=existing-network: (33.126225186s)
helpers_test.go:175: Cleaning up "existing-network-902275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-902275
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-902275: (2.01022687s)
I0127 11:47:24.228950  893715 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.29s)
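
The pre-created network in this test comes from the docker network create call logged at 11:46:49, after minikube skipped the already-taken 192.168.49.0/24 subnet and picked the free 192.168.58.0/24. As a sketch only, the same invocation can be reproduced from Go via os/exec (assuming the docker CLI is on PATH and that subnet is still free):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the "docker network create" arguments shown in the log above;
		// the subnet and gateway are simply the values the test happened to pick.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24",
			"--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("network create failed:", err)
		}
	}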

                                                
                                    
TestKicCustomSubnet (33.32s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-941169 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-941169 --subnet=192.168.60.0/24: (31.212792884s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-941169 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-941169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-941169
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-941169: (2.083306986s)
--- PASS: TestKicCustomSubnet (33.32s)

                                                
                                    
TestKicStaticIP (32.89s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-707548 --static-ip=192.168.200.200
E0127 11:48:21.857153  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-707548 --static-ip=192.168.200.200: (30.6513151s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-707548 ip
helpers_test.go:175: Cleaning up "static-ip-707548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-707548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-707548: (2.077216521s)
--- PASS: TestKicStaticIP (32.89s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (66.87s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-473866 --driver=docker  --container-runtime=containerd
E0127 11:48:37.089191  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-473866 --driver=docker  --container-runtime=containerd: (31.519678696s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-476369 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-476369 --driver=docker  --container-runtime=containerd: (29.962470153s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-473866
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-476369
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-476369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-476369
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-476369: (2.018256883s)
helpers_test.go:175: Cleaning up "first-473866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-473866
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-473866: (1.990709882s)
--- PASS: TestMinikubeProfile (66.87s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-945676 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-945676 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.949644253s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.95s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-945676 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-947547 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-947547 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.536271185s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.54s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-947547 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-945676 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-945676 --alsologtostderr -v=5: (1.614625473s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-947547 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-947547
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-947547: (1.217971497s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.18s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-947547
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-947547: (6.183117442s)
--- PASS: TestMountStart/serial/RestartStopped (7.18s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-947547 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (64.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-407627 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-407627 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.135592083s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.68s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (20.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-407627 -- rollout status deployment/busybox: (19.003559385s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-lnhgb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-nnv5r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-lnhgb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-nnv5r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-lnhgb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-nnv5r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.97s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-lnhgb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-lnhgb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-nnv5r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-407627 -- exec busybox-58667487b6-nnv5r -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                    
TestMultiNode/serial/AddNode (18.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-407627 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-407627 -v 3 --alsologtostderr: (17.841717731s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.49s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-407627 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp testdata/cp-test.txt multinode-407627:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1697642493/001/cp-test_multinode-407627.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627:/home/docker/cp-test.txt multinode-407627-m02:/home/docker/cp-test_multinode-407627_multinode-407627-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m02 "sudo cat /home/docker/cp-test_multinode-407627_multinode-407627-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627:/home/docker/cp-test.txt multinode-407627-m03:/home/docker/cp-test_multinode-407627_multinode-407627-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m03 "sudo cat /home/docker/cp-test_multinode-407627_multinode-407627-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp testdata/cp-test.txt multinode-407627-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1697642493/001/cp-test_multinode-407627-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627-m02:/home/docker/cp-test.txt multinode-407627:/home/docker/cp-test_multinode-407627-m02_multinode-407627.txt
E0127 11:51:58.789123  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627 "sudo cat /home/docker/cp-test_multinode-407627-m02_multinode-407627.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627-m02:/home/docker/cp-test.txt multinode-407627-m03:/home/docker/cp-test_multinode-407627-m02_multinode-407627-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m03 "sudo cat /home/docker/cp-test_multinode-407627-m02_multinode-407627-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp testdata/cp-test.txt multinode-407627-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1697642493/001/cp-test_multinode-407627-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627-m03:/home/docker/cp-test.txt multinode-407627:/home/docker/cp-test_multinode-407627-m03_multinode-407627.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627 "sudo cat /home/docker/cp-test_multinode-407627-m03_multinode-407627.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 cp multinode-407627-m03:/home/docker/cp-test.txt multinode-407627-m02:/home/docker/cp-test_multinode-407627-m03_multinode-407627-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 ssh -n multinode-407627-m02 "sudo cat /home/docker/cp-test_multinode-407627-m03_multinode-407627-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-407627 node stop m03: (1.216083511s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-407627 status: exit status 7 (521.794293ms)

                                                
                                                
-- stdout --
	multinode-407627
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-407627-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-407627-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr: exit status 7 (499.262096ms)

                                                
                                                
-- stdout --
	multinode-407627
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-407627-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-407627-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:52:05.564527 1017989 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:52:05.564698 1017989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:52:05.564728 1017989 out.go:358] Setting ErrFile to fd 2...
	I0127 11:52:05.564755 1017989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:52:05.565005 1017989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:52:05.565242 1017989 out.go:352] Setting JSON to false
	I0127 11:52:05.565323 1017989 mustload.go:65] Loading cluster: multinode-407627
	I0127 11:52:05.565402 1017989 notify.go:220] Checking for updates...
	I0127 11:52:05.566355 1017989 config.go:182] Loaded profile config "multinode-407627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:52:05.566409 1017989 status.go:174] checking status of multinode-407627 ...
	I0127 11:52:05.567081 1017989 cli_runner.go:164] Run: docker container inspect multinode-407627 --format={{.State.Status}}
	I0127 11:52:05.584608 1017989 status.go:371] multinode-407627 host status = "Running" (err=<nil>)
	I0127 11:52:05.584638 1017989 host.go:66] Checking if "multinode-407627" exists ...
	I0127 11:52:05.584945 1017989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-407627
	I0127 11:52:05.605972 1017989 host.go:66] Checking if "multinode-407627" exists ...
	I0127 11:52:05.606268 1017989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:52:05.606323 1017989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-407627
	I0127 11:52:05.625311 1017989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33702 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/multinode-407627/id_rsa Username:docker}
	I0127 11:52:05.710471 1017989 ssh_runner.go:195] Run: systemctl --version
	I0127 11:52:05.714922 1017989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:52:05.726200 1017989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 11:52:05.786090 1017989 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2025-01-27 11:52:05.776958072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 11:52:05.786705 1017989 kubeconfig.go:125] found "multinode-407627" server: "https://192.168.67.2:8443"
	I0127 11:52:05.786735 1017989 api_server.go:166] Checking apiserver status ...
	I0127 11:52:05.786793 1017989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:52:05.798314 1017989 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	I0127 11:52:05.807505 1017989 api_server.go:182] apiserver freezer: "3:freezer:/docker/f400dab9087e0f07794050cf3b5d390412d07a0b27178cbf35999fd1734b8e97/kubepods/burstable/podb4d5e21112e46a0ebc9223df2dbb1b07/5b9a9ee4999b9cb7169a27107ae3d85efcf2455a99bd41e64a90b13203344a37"
	I0127 11:52:05.807577 1017989 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f400dab9087e0f07794050cf3b5d390412d07a0b27178cbf35999fd1734b8e97/kubepods/burstable/podb4d5e21112e46a0ebc9223df2dbb1b07/5b9a9ee4999b9cb7169a27107ae3d85efcf2455a99bd41e64a90b13203344a37/freezer.state
	I0127 11:52:05.816635 1017989 api_server.go:204] freezer state: "THAWED"
	I0127 11:52:05.816675 1017989 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0127 11:52:05.824909 1017989 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0127 11:52:05.824938 1017989 status.go:463] multinode-407627 apiserver status = Running (err=<nil>)
	I0127 11:52:05.824949 1017989 status.go:176] multinode-407627 status: &{Name:multinode-407627 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:52:05.824968 1017989 status.go:174] checking status of multinode-407627-m02 ...
	I0127 11:52:05.825362 1017989 cli_runner.go:164] Run: docker container inspect multinode-407627-m02 --format={{.State.Status}}
	I0127 11:52:05.842013 1017989 status.go:371] multinode-407627-m02 host status = "Running" (err=<nil>)
	I0127 11:52:05.842038 1017989 host.go:66] Checking if "multinode-407627-m02" exists ...
	I0127 11:52:05.842349 1017989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-407627-m02
	I0127 11:52:05.859193 1017989 host.go:66] Checking if "multinode-407627-m02" exists ...
	I0127 11:52:05.859567 1017989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:52:05.859619 1017989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-407627-m02
	I0127 11:52:05.876712 1017989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33707 SSHKeyPath:/home/jenkins/minikube-integration/20318-888339/.minikube/machines/multinode-407627-m02/id_rsa Username:docker}
	I0127 11:52:05.962304 1017989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:52:05.974574 1017989 status.go:176] multinode-407627-m02 status: &{Name:multinode-407627-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:52:05.974611 1017989 status.go:174] checking status of multinode-407627-m03 ...
	I0127 11:52:05.974959 1017989 cli_runner.go:164] Run: docker container inspect multinode-407627-m03 --format={{.State.Status}}
	I0127 11:52:05.991735 1017989 status.go:371] multinode-407627-m03 host status = "Stopped" (err=<nil>)
	I0127 11:52:05.991755 1017989 status.go:384] host is not running, skipping remaining checks
	I0127 11:52:05.991762 1017989 status.go:176] multinode-407627-m03 status: &{Name:multinode-407627-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
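
The stderr trace above shows the order in which the status command decides the apiserver is Running: find the kube-apiserver PID with pgrep, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm the cgroup is THAWED, and finally probe /healthz. The following is a minimal Go sketch of that same sequence, run directly on the node rather than through the test's SSH runner; the hard-coded endpoint comes from the log, while the insecure TLS client is an assumption made purely to keep the example short.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"regexp"
	"strings"
	"time"
)

func main() {
	// 1. Find the kube-apiserver process, as the trace does with pgrep.
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver: Stopped")
		os.Exit(0)
	}

	// 2. Resolve the freezer cgroup for that PID and make sure it is THAWED.
	cg, _ := os.ReadFile(fmt.Sprintf("/proc/%s/cgroup", strings.TrimSpace(string(pid))))
	if m := regexp.MustCompile(`(?m)^\d+:freezer:(.*)$`).FindStringSubmatch(string(cg)); m != nil {
		state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + m[1] + "/freezer.state")
		if strings.TrimSpace(string(state)) != "THAWED" {
			fmt.Println("apiserver: Paused")
			return
		}
	}

	// 3. Probe the healthz endpoint; a 200 response means Running.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption: skip CA handling for brevity
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err == nil {
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver: Running")
			return
		}
	}
	fmt.Println("apiserver: Error")
}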

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-407627 node start m03 -v=7 --alsologtostderr: (8.758358224s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.49s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (84.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-407627
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-407627
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-407627: (24.826626864s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-407627 --wait=true -v=8 --alsologtostderr
E0127 11:53:37.084725  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-407627 --wait=true -v=8 --alsologtostderr: (59.209014034s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-407627
--- PASS: TestMultiNode/serial/RestartKeepsNodes (84.16s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-407627 node delete m03: (4.607844267s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)
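
The go-template passed to kubectl above prints one Ready condition status per node. As a rough illustration, the sketch below runs the same query and counts how many nodes report Ready as "True"; kubectlGetReady is a hypothetical helper, not something from the test suite.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectlGetReady runs the same go-template query the test uses and
// returns how many nodes report the Ready condition as "True".
func kubectlGetReady() (int, error) {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		return 0, err
	}
	ready := 0
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) == "True" {
			ready++
		}
	}
	return ready, nil
}

func main() {
	n, err := kubectlGetReady()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("%d node(s) Ready\n", n)
}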

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-407627 stop: (23.705449553s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-407627 status: exit status 7 (89.461852ms)

                                                
                                                
-- stdout --
	multinode-407627
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-407627-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr: exit status 7 (95.778812ms)

                                                
                                                
-- stdout --
	multinode-407627
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-407627-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:54:08.796315 1026030 out.go:345] Setting OutFile to fd 1 ...
	I0127 11:54:08.796520 1026030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:08.796547 1026030 out.go:358] Setting ErrFile to fd 2...
	I0127 11:54:08.796566 1026030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:54:08.796840 1026030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 11:54:08.797086 1026030 out.go:352] Setting JSON to false
	I0127 11:54:08.797157 1026030 mustload.go:65] Loading cluster: multinode-407627
	I0127 11:54:08.797188 1026030 notify.go:220] Checking for updates...
	I0127 11:54:08.797927 1026030 config.go:182] Loaded profile config "multinode-407627": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 11:54:08.797979 1026030 status.go:174] checking status of multinode-407627 ...
	I0127 11:54:08.798571 1026030 cli_runner.go:164] Run: docker container inspect multinode-407627 --format={{.State.Status}}
	I0127 11:54:08.817335 1026030 status.go:371] multinode-407627 host status = "Stopped" (err=<nil>)
	I0127 11:54:08.817358 1026030 status.go:384] host is not running, skipping remaining checks
	I0127 11:54:08.817364 1026030 status.go:176] multinode-407627 status: &{Name:multinode-407627 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:54:08.817397 1026030 status.go:174] checking status of multinode-407627-m02 ...
	I0127 11:54:08.817726 1026030 cli_runner.go:164] Run: docker container inspect multinode-407627-m02 --format={{.State.Status}}
	I0127 11:54:08.837839 1026030 status.go:371] multinode-407627-m02 host status = "Stopped" (err=<nil>)
	I0127 11:54:08.837863 1026030 status.go:384] host is not running, skipping remaining checks
	I0127 11:54:08.837870 1026030 status.go:176] multinode-407627-m02 status: &{Name:multinode-407627-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.89s)
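
Both status invocations above exit with code 7 because every node's container is stopped, and the trace shows the host state coming straight from docker container inspect. Below is a small sketch of that check, under the assumption observed in the log that a non-running host maps to exit code 7; dockerState is an illustrative helper and the node names are the ones from this run.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// dockerState asks Docker for the container state ("running", "exited", ...),
// mirroring the `docker container inspect --format={{.State.Status}}` calls in the trace.
func dockerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	exitCode := 0
	for _, node := range []string{"multinode-407627", "multinode-407627-m02"} {
		state, err := dockerState(node)
		if err != nil || state != "running" {
			fmt.Printf("%s\nhost: Stopped\n\n", node)
			exitCode = 7 // assumption based on the log: any stopped host yields exit 7
			continue
		}
		fmt.Printf("%s\nhost: Running\n\n", node)
	}
	os.Exit(exitCode)
}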

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-407627 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0127 11:55:00.151250  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-407627 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.530043573s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-407627 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.18s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (33.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-407627
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-407627-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-407627-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.709718ms)

                                                
                                                
-- stdout --
	* [multinode-407627-m02] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-407627-m02' is duplicated with machine name 'multinode-407627-m02' in profile 'multinode-407627'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-407627-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-407627-m03 --driver=docker  --container-runtime=containerd: (30.784348406s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-407627
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-407627: exit status 80 (347.885944ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-407627 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-407627-m03 already exists in multinode-407627-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-407627-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-407627-m03: (1.980873049s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.27s)
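
The first start is rejected with MK_USAGE because the proposed profile name matches a machine name inside the existing multinode-407627 profile, and the later node add fails because multinode-407627-m03 is by then owned by its own profile. The sketch below illustrates the uniqueness rule being exercised; the Profile struct and the hard-coded profile list are simplified assumptions, not minikube's actual data model.

package main

import "fmt"

// Profile is a simplified stand-in for a minikube profile and its node (machine) names.
type Profile struct {
	Name  string
	Nodes []string
}

// nameConflicts reports whether a proposed profile name collides with an
// existing profile name or with any machine name inside an existing profile.
func nameConflicts(proposed string, existing []Profile) (string, bool) {
	for _, p := range existing {
		if p.Name == proposed {
			return p.Name, true
		}
		for _, n := range p.Nodes {
			if n == proposed {
				return p.Name, true
			}
		}
	}
	return "", false
}

func main() {
	existing := []Profile{{
		Name:  "multinode-407627",
		Nodes: []string{"multinode-407627", "multinode-407627-m02", "multinode-407627-m03"},
	}}
	if owner, bad := nameConflicts("multinode-407627-m02", existing); bad {
		fmt.Printf("profile name is duplicated with a machine name in profile %q\n", owner)
	}
}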

                                                
                                    
x
+
TestPreload (123.39s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-010440 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0127 11:56:58.789148  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-010440 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m26.869173386s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-010440 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-010440 image pull gcr.io/k8s-minikube/busybox: (2.101867119s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-010440
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-010440: (11.959812318s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-010440 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-010440 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.480387844s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-010440 image list
helpers_test.go:175: Cleaning up "test-preload-010440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-010440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-010440: (2.628011222s)
--- PASS: TestPreload (123.39s)
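
The preload test starts an older Kubernetes version without a preload tarball, pulls an extra image, stops, restarts on the current default, and then checks that the image is still listed. Below is a sketch of that final verification step; the binary path, profile name, and image name are taken from the commands above, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List images in the restarted cluster, as the test does after the second start.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "test-preload-010440", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The image pulled before the restart should still be present afterwards.
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("pulled image survived the restart")
	} else {
		fmt.Println("busybox image missing after restart")
	}
}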

                                                
                                    
x
+
TestInsufficientStorage (10.24s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-943970 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-943970 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.768502783s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"17c129d0-93e9-446d-b40b-1c7632d4f820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-943970] minikube v1.35.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a635a766-6ac5-418e-85b0-56ead68e4da5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20318"}}
	{"specversion":"1.0","id":"64b76934-1ba0-49c6-811c-1e9915c93128","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b42d9119-3b56-4718-9e7e-f09831257e8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig"}}
	{"specversion":"1.0","id":"a4bb65ed-3f07-445a-a553-ad371aad0cc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube"}}
	{"specversion":"1.0","id":"9a97e03b-cdc1-4e76-a152-6cc5f8615284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"090e35d2-1d3e-4333-a165-7bc03cf528d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fda7f791-b90a-459d-bc2f-9f930d929b0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a5943337-304e-48f8-8bca-00adec039572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"22684e83-63ce-4ecd-8316-e0dbc55bd546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"747608a1-7430-44bb-826a-dda816f36f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"245117e3-36f6-475d-b067-5729e3c01fe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-943970\" primary control-plane node in \"insufficient-storage-943970\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"56e5f521-bc31-4c4a-9ce4-ec61bad7988b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a680122-1d03-4760-acfd-f2b008c19c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"124883f7-120d-4ef5-a6f2-8085dbec5fbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-943970 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-943970 --output=json --layout=cluster: exit status 7 (282.08051ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-943970","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-943970","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 11:58:26.151192 1044377 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-943970" does not appear in /home/jenkins/minikube-integration/20318-888339/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-943970 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-943970 --output=json --layout=cluster: exit status 7 (281.739915ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-943970","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-943970","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 11:58:26.433655 1044437 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-943970" does not appear in /home/jenkins/minikube-integration/20318-888339/kubeconfig
	E0127 11:58:26.444028 1044437 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/insufficient-storage-943970/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-943970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-943970
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-943970: (1.909478694s)
--- PASS: TestInsufficientStorage (10.24s)
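
With --output=json each line minikube prints is a CloudEvents-style JSON object, and the test keys off the io.k8s.sigs.minikube.error event whose exitcode is 26 (RSRC_DOCKER_STORAGE). The sketch below scans such a stream for error events; the event struct only models the fields visible in the output above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the CloudEvents fields visible in the output above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Read minikube's --output=json stream line by line from stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // individual JSON lines can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exit code %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
		}
	}
}

It could be fed directly from a pipe, for example: out/minikube-linux-arm64 start ... --output=json | go run parse_events.go (file name hypothetical).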

                                                
                                    
x
+
TestRunningBinaryUpgrade (99.27s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.884672090 start -p running-upgrade-001027 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.884672090 start -p running-upgrade-001027 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (59.086337145s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-001027 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-001027 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.104014057s)
helpers_test.go:175: Cleaning up "running-upgrade-001027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-001027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-001027: (2.421641175s)
--- PASS: TestRunningBinaryUpgrade (99.27s)
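
The running-binary upgrade is essentially a two-step flow: bring the cluster up with a previously released binary unpacked under /tmp, then run the freshly built binary against the same, still-running profile. Below is a sketch of that sequence with os/exec; the binary paths and profile name are copied from the commands logged above, and the run helper is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, streams its output, and fails hard on error.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, name, "failed:", err)
		os.Exit(1)
	}
}

func main() {
	profile := "running-upgrade-001027"
	// 1. Start the cluster with the old release binary (v1.26.0 in this run).
	run("/tmp/minikube-v1.26.0.884672090", "start", "-p", profile,
		"--memory=2200", "--vm-driver=docker", "--container-runtime=containerd")
	// 2. Upgrade in place: point the current binary at the same running profile.
	run("out/minikube-linux-arm64", "start", "-p", profile,
		"--memory=2200", "--driver=docker", "--container-runtime=containerd")
	// 3. Clean up.
	run("out/minikube-linux-arm64", "delete", "-p", profile)
}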

                                                
                                    
x
+
TestKubernetesUpgrade (188.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.863662548s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-651496
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-651496: (1.272305498s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-651496 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-651496 status --format={{.Host}}: exit status 7 (88.301578ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m58.658739652s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-651496 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (117.154198ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-651496] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-651496
	    minikube start -p kubernetes-upgrade-651496 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6514962 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-651496 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-651496 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.996372017s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-651496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-651496
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-651496: (2.303603743s)
--- PASS: TestKubernetesUpgrade (188.51s)
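
The downgrade attempt exits with code 106 (K8S_DOWNGRADE_UNSUPPORTED) because the requested version is lower than the version the existing cluster already runs. Below is a sketch of that comparison using golang.org/x/mod/semver; the versions and the exit code come from the log, everything else is illustrative rather than minikube's actual implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	current, requested := "v1.32.1", "v1.20.0" // versions from the test run above

	// semver.Compare returns -1, 0, or +1; a requested version lower than the
	// cluster's current version means an unsupported in-place downgrade.
	if semver.Compare(requested, current) < 0 {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
		os.Exit(106) // K8S_DOWNGRADE_UNSUPPORTED, as seen in the log
	}
	fmt.Println("upgrade or same-version restart is allowed")
}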

                                                
                                    
x
+
TestMissingContainerUpgrade (183.58s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4129864323 start -p missing-upgrade-635755 --memory=2200 --driver=docker  --container-runtime=containerd
E0127 11:58:37.083970  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4129864323 start -p missing-upgrade-635755 --memory=2200 --driver=docker  --container-runtime=containerd: (1m36.507253727s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-635755
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-635755: (10.297650674s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-635755
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-635755 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-635755 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m13.737775935s)
helpers_test.go:175: Cleaning up "missing-upgrade-635755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-635755
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-635755: (2.373336383s)
--- PASS: TestMissingContainerUpgrade (183.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824149 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-824149 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (103.97115ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-824149] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
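
The usage error above is a plain flag-compatibility check: --no-kubernetes cannot be combined with an explicit --kubernetes-version. Below is a minimal sketch of that validation with the standard flag package; the flag names mirror the CLI, but the wiring is illustrative only.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Mirrors the MK_USAGE rejection in the log: the two flags are mutually exclusive.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes,")
		fmt.Fprintln(os.Stderr, "to unset a global config run:")
		fmt.Fprintln(os.Stderr, "\n$ minikube config unset kubernetes-version")
		os.Exit(14) // MK_USAGE
	}
	fmt.Println("flags are consistent")
}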

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824149 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824149 --driver=docker  --container-runtime=containerd: (36.785455361s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-824149 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (21.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824149 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824149 --no-kubernetes --driver=docker  --container-runtime=containerd: (18.914232249s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-824149 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-824149 status -o json: exit status 2 (380.578681ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-824149","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-824149
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-824149: (2.016259716s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824149 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824149 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.539590762s)
--- PASS: TestNoKubernetes/serial/Start (5.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-824149 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-824149 "sudo systemctl is-active --quiet service kubelet": exit status 1 (253.969113ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
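
The verification relies on systemctl exit codes: systemctl is-active --quiet exits 0 when the unit is active and non-zero otherwise, which the SSH runner surfaces here as exit status 3 (inactive). The sketch below interprets that result through minikube's ssh subcommand; kubeletActive is a hypothetical helper, while the binary path, profile name, and remote command are taken from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active inside the node,
// based on the exit code of `systemctl is-active --quiet` run over minikube ssh.
func kubeletActive(profile string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: unit is active
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit (3 = inactive in the log): kubelet not running
	}
	return false, err // ssh or exec failure, not a status answer
}

func main() {
	active, err := kubeletActive("NoKubernetes-824149")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("kubelet active:", active)
}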

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-824149
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-824149: (1.210485415s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824149 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824149 --driver=docker  --container-runtime=containerd: (6.799568617s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-824149 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-824149 "sudo systemctl is-active --quiet service kubelet": exit status 1 (372.221846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (106.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.904426926 start -p stopped-upgrade-999062 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0127 12:01:58.791855  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.904426926 start -p stopped-upgrade-999062 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.881149657s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.904426926 -p stopped-upgrade-999062 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.904426926 -p stopped-upgrade-999062 stop: (19.977313063s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-999062 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-999062 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.400244953s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.26s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-999062
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

                                                
                                    
x
+
TestPause/serial/Start (92.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-367356 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0127 12:03:37.084376  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-367356 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.066371636s)
--- PASS: TestPause/serial/Start (92.07s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (8.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-367356 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0127 12:05:01.858720  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-367356 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.264963565s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.29s)

                                                
                                    
x
+
TestPause/serial/Pause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-367356 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-367356 --alsologtostderr -v=5: (1.028163153s)
--- PASS: TestPause/serial/Pause (1.03s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-367356 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-367356 --output=json --layout=cluster: exit status 2 (393.85183ms)

                                                
                                                
-- stdout --
	{"Name":"pause-367356","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-367356","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
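
With --layout=cluster the status command emits a single JSON document whose HTTP-style StatusCode fields encode state (418 Paused and 405 Stopped above, 200 for OK), and a paused cluster still makes the command exit 2. The sketch below decodes that document from stdin; the clusterStatus struct models only the fields shown above.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// clusterStatus models just the fields visible in the --layout=cluster output above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		Components map[string]struct {
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	fmt.Printf("cluster %s: %s\n", st.Name, st.StatusName)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			// 418 = Paused, 405 = Stopped, 200 = OK in the layout shown above.
			fmt.Printf("  %s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}

It could be fed with, for example: out/minikube-linux-arm64 status -p pause-367356 --output=json --layout=cluster | go run decode_status.go (file name hypothetical).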

                                                
                                    
x
+
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-367356 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.13s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-367356 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-367356 --alsologtostderr -v=5: (1.12954576s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.27s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-367356 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-367356 --alsologtostderr -v=5: (3.267899534s)
--- PASS: TestPause/serial/DeletePaused (3.27s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.84s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-367356
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-367356: exit status 1 (27.795556ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-367356: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.84s)
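
Verifying deletion here amounts to asserting that the profile's Docker artifacts are gone: docker volume inspect on the deleted profile must fail with a "no such volume" error, exactly as it does above. A small sketch of that assertion follows; volumeGone is a hypothetical helper.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone returns true when `docker volume inspect` fails with a
// "no such volume" error, i.e. the volume really was removed.
func volumeGone(name string) (bool, error) {
	var stderr bytes.Buffer
	cmd := exec.Command("docker", "volume", "inspect", name)
	cmd.Stderr = &stderr
	err := cmd.Run()
	if err == nil {
		return false, nil // inspect succeeded: the volume still exists
	}
	if strings.Contains(strings.ToLower(stderr.String()), "no such volume") {
		return true, nil
	}
	return false, fmt.Errorf("unexpected docker error: %s", stderr.String())
}

func main() {
	gone, err := volumeGone("pause-367356")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("volume removed:", gone)
}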

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-106238 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-106238 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (270.065656ms)

                                                
                                                
-- stdout --
	* [false-106238] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:05:15.095931 1082581 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:05:15.096174 1082581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:05:15.096203 1082581 out.go:358] Setting ErrFile to fd 2...
	I0127 12:05:15.096222 1082581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:05:15.096507 1082581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20318-888339/.minikube/bin
	I0127 12:05:15.096981 1082581 out.go:352] Setting JSON to false
	I0127 12:05:15.098105 1082581 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17260,"bootTime":1737962255,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0127 12:05:15.098237 1082581 start.go:139] virtualization:  
	I0127 12:05:15.102039 1082581 out.go:177] * [false-106238] minikube v1.35.0 on Ubuntu 20.04 (arm64)
	I0127 12:05:15.105282 1082581 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:05:15.105351 1082581 notify.go:220] Checking for updates...
	I0127 12:05:15.110902 1082581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:05:15.113790 1082581 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20318-888339/kubeconfig
	I0127 12:05:15.116600 1082581 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20318-888339/.minikube
	I0127 12:05:15.119481 1082581 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0127 12:05:15.122440 1082581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:05:15.125802 1082581 config.go:182] Loaded profile config "force-systemd-env-947488": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 12:05:15.125992 1082581 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:05:15.157580 1082581 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0127 12:05:15.157716 1082581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 12:05:15.252619 1082581 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-01-27 12:05:15.242895465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
	I0127 12:05:15.252734 1082581 docker.go:318] overlay module found
	I0127 12:05:15.256008 1082581 out.go:177] * Using the docker driver based on user configuration
	I0127 12:05:15.258758 1082581 start.go:297] selected driver: docker
	I0127 12:05:15.258781 1082581 start.go:901] validating driver "docker" against <nil>
	I0127 12:05:15.258796 1082581 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:05:15.264810 1082581 out.go:201] 
	W0127 12:05:15.267751 1082581 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 12:05:15.270395 1082581 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-106238 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-106238" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-106238

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-106238"

                                                
                                                
----------------------- debugLogs end: false-106238 [took: 5.401101907s] --------------------------------
helpers_test.go:175: Cleaning up "false-106238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-106238
--- PASS: TestNetworkPlugins/group/false (5.86s)
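Note: the exit status 14 above is minikube's MK_USAGE error; with the containerd runtime a CNI plugin is required, so --cni=false is rejected during flag validation before any cluster resources are created. A minimal sketch of reproducing the check by hand, reusing the exact flags from the rejected invocation (the echo of $? is illustrative and not part of the test):

	# Re-run the rejected invocation; it should fail fast with MK_USAGE (exit status 14)
	# because the "containerd" container runtime requires CNI.
	out/minikube-linux-arm64 start -p false-106238 --memory=2048 \
	  --cni=false --driver=docker --container-runtime=containerd
	echo "exit status: $?"   # 14 expected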

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (144.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-999803 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0127 12:06:58.789073  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:08:37.084005  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-999803 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m24.585515274s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-999803 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [036565b7-e7f4-4824-a0c4-00679b5421ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [036565b7-e7f4-4824-a0c4-00679b5421ed] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004265597s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-999803 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.81s)
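Note: the DeployApp step can be replayed by hand. The commands below mirror what the test does (create testdata/busybox.yaml in the default namespace, wait for the pod carrying the integration-test=busybox label, then exec "ulimit -n"); the kubectl wait call is an assumed stand-in for the test's own 8m0s polling helper:

	kubectl --context old-k8s-version-999803 create -f testdata/busybox.yaml
	# Equivalent manual check for the readiness the test polls for.
	kubectl --context old-k8s-version-999803 wait --for=condition=Ready \
	  pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-999803 exec busybox -- /bin/sh -c "ulimit -n"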

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-999803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-999803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.479232536s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-999803 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)
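Note: EnableAddonWhileActive enables metrics-server with its image and registry overridden to stand-ins (registry.k8s.io/echoserver:1.4 pulled via fake.domain), so the addon machinery is exercised without the real image. A manual equivalent of the two commands above, plus an illustrative grep to confirm the override landed in the Deployment spec:

	out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-999803 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	# The grep filter is illustrative only; the test just runs the describe.
	kubectl --context old-k8s-version-999803 describe deploy/metrics-server -n kube-system | grep -i image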

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-999803 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-999803 --alsologtostderr -v=3: (12.784740243s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-835765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-835765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m3.536319205s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-999803 -n old-k8s-version-999803
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-999803 -n old-k8s-version-999803: exit status 7 (144.514409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-999803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)
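Note: EnableAddonAfterStop depends on "minikube status" reporting a stopped host with exit status 7 (treated as "may be ok") and on addon toggles being persisted against the stopped profile; the change is expected to take effect on the next start. A hedged manual walk-through of the same sequence (the echo of $? is illustrative):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-999803 -n old-k8s-version-999803
	echo "status exit: $?"   # 7 expected while the host is stopped
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-999803 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4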

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-835765 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [84e849b5-b428-474a-9404-2083ecd0b3e1] Pending
helpers_test.go:344: "busybox" [84e849b5-b428-474a-9404-2083ecd0b3e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [84e849b5-b428-474a-9404-2083ecd0b3e1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004740871s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-835765 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-835765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-835765 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002150341s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-835765 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-835765 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-835765 --alsologtostderr -v=3: (12.109630885s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-835765 -n no-preload-835765
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-835765 -n no-preload-835765: exit status 7 (72.47562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-835765 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (289.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-835765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:11:40.152959  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:11:58.788916  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:13:37.084428  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-835765 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m49.119023353s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-835765 -n no-preload-835765
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qwgv5" [d66094c8-5c1a-4aaa-a14c-27954c4c5434] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003812408s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qwgv5" [d66094c8-5c1a-4aaa-a14c-27954c4c5434] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005268s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-835765 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-835765 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
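Note: VerifyKubernetesImages lists every image loaded for the profile and reports anything outside the expected Kubernetes set; the kindnetd and busybox images above are flagged but tolerated. A manual equivalent, with an assumed filter for images outside registry.k8s.io:

	# JSON listing, as used by the test.
	out/minikube-linux-arm64 -p no-preload-835765 image list --format=json
	# Rough manual filter for the non-minikube images called out above (illustrative only).
	out/minikube-linux-arm64 -p no-preload-835765 image list | grep -Ev '^registry.k8s.io/'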

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-835765 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-835765 --alsologtostderr -v=1: (1.252270976s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-835765 -n no-preload-835765
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-835765 -n no-preload-835765: exit status 2 (325.349291ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-835765 -n no-preload-835765
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-835765 -n no-preload-835765: exit status 2 (323.970083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-835765 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-835765 -n no-preload-835765
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-835765 -n no-preload-835765
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.51s)
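Note: the Pause step treats exit status 2 from "minikube status" as the expected signature of a paused cluster: the apiserver component reports Paused while the kubelet reports Stopped. The sketch below mirrors the sequence in the log (pause, check both components, unpause, re-check); the trailing comments describe expected results, not captured output:

	out/minikube-linux-arm64 pause -p no-preload-835765 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-835765 -n no-preload-835765   # "Paused", exit 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-835765 -n no-preload-835765     # "Stopped", exit 2
	out/minikube-linux-arm64 unpause -p no-preload-835765 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-835765 -n no-preload-835765   # exit 0 after unpause (value not shown in the log)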

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rjt7d" [507f9065-3e95-4715-9e6e-43e0e9e30385] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004957834s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (99.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-639161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-639161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m39.930302221s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rjt7d" [507f9065-3e95-4715-9e6e-43e0e9e30385] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004815165s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-999803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-999803 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-999803 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-999803 --alsologtostderr -v=1: (1.356971104s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-999803 -n old-k8s-version-999803
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-999803 -n old-k8s-version-999803: exit status 2 (355.009257ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-999803 -n old-k8s-version-999803
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-999803 -n old-k8s-version-999803: exit status 2 (333.082506ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-999803 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-999803 -n old-k8s-version-999803
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-999803 -n old-k8s-version-999803
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-475388 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:16:58.788946  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-475388 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m0.670154587s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.67s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-475388 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6ee7b788-1c64-4a9b-b6bf-1df35a83ec1d] Pending
helpers_test.go:344: "busybox" [6ee7b788-1c64-4a9b-b6bf-1df35a83ec1d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6ee7b788-1c64-4a9b-b6bf-1df35a83ec1d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.002950321s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-475388 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-475388 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-475388 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013811352s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-475388 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-475388 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-475388 --alsologtostderr -v=3: (12.018288609s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388: exit status 7 (76.683072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-475388 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-475388 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-475388 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m38.77938217s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-639161 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29e8c376-511b-45df-be6c-d16c0808cd24] Pending
helpers_test.go:344: "busybox" [29e8c376-511b-45df-be6c-d16c0808cd24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29e8c376-511b-45df-be6c-d16c0808cd24] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004271959s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-639161 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-639161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-639161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.630547302s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-639161 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-639161 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-639161 --alsologtostderr -v=3: (12.639941715s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-639161 -n embed-certs-639161
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-639161 -n embed-certs-639161: exit status 7 (79.240982ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-639161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-639161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 12:18:37.084451  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.020868  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.027256  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.038737  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.060186  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.101667  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.183692  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.345480  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:18:59.667155  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:00.309368  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:01.591352  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:04.152773  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:09.274784  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:19.516909  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:19:39.999181  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.102547  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.109104  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.120511  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.141860  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.183270  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.264729  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.426346  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:16.747983  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:17.389876  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:18.671348  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:20.961221  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:21.233804  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:26.355892  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:36.597885  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:20:57.079763  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:21:38.041669  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:21:41.860469  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:21:42.882566  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:21:58.789040  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-639161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m25.893352597s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-639161 -n embed-certs-639161
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rlb9g" [aa9d66e2-f21a-483c-ab13-23741b807273] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0037163s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rlb9g" [aa9d66e2-f21a-483c-ab13-23741b807273] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004106427s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-475388 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-475388 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-475388 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388: exit status 2 (330.886606ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388: exit status 2 (307.029274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-475388 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-475388 -n default-k8s-diff-port-475388
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-v8ch5" [8d98eb7d-3e86-41fb-9906-56220b4d7167] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003812275s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-192040 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-192040 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (40.533499546s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-v8ch5" [8d98eb7d-3e86-41fb-9906-56220b4d7167] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004541788s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-639161 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-639161 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-639161 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-639161 --alsologtostderr -v=1: (1.04112213s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-639161 -n embed-certs-639161
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-639161 -n embed-certs-639161: exit status 2 (407.205573ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-639161 -n embed-certs-639161
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-639161 -n embed-certs-639161: exit status 2 (413.714655ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-639161 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-639161 --alsologtostderr -v=1: (1.053188211s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-639161 -n embed-certs-639161
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-639161 -n embed-certs-639161
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (95.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0127 12:22:59.963940  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m35.268868084s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-192040 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-192040 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.590328172s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.59s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-192040 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-192040 --alsologtostderr -v=3: (1.359997602s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192040 -n newest-cni-192040
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192040 -n newest-cni-192040: exit status 7 (120.744589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-192040 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (19.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-192040 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-192040 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (18.937906693s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192040 -n newest-cni-192040
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-192040 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-192040 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192040 -n newest-cni-192040
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192040 -n newest-cni-192040: exit status 2 (332.052175ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-192040 -n newest-cni-192040
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-192040 -n newest-cni-192040: exit status 2 (328.604358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-192040 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192040 -n newest-cni-192040
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-192040 -n newest-cni-192040
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.03s)
E0127 12:28:37.084470  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:28:59.021076  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:11.851739  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:11.858211  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:11.869682  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:11.891183  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:11.932699  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:12.014199  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:12.175864  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:12.497957  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:13.140157  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:14.422105  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:16.983593  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:22.105789  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.409483  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.415891  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.427232  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.448712  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.490131  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.571628  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:23.733147  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:24.054921  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:24.696198  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:25.977658  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:28.540452  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:32.347123  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:33.661897  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:43.904061  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/kindnet-106238/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:44.165669  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (54.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0127 12:23:37.084787  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/functional-451719/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:23:59.020101  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (54.52481511s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.52s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-106238 "pgrep -a kubelet"
I0127 12:24:11.583367  893715 config.go:182] Loaded profile config "auto-106238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-106238 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7ksqs" [99b794ac-4545-4bc1-a67e-d93ca5dd3bc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7ksqs" [99b794ac-4545-4bc1-a67e-d93ca5dd3bc9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004606537s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-106238 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lktq9" [2fbb5fad-7f40-44ef-8154-1544a7926d35] Running
E0127 12:24:26.724902  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/old-k8s-version-999803/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003862221s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-106238 "pgrep -a kubelet"
I0127 12:24:29.830977  893715 config.go:182] Loaded profile config "kindnet-106238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-106238 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-q5wxg" [6d4e25ee-e6fb-4ad4-ba94-aa499e555ddb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-q5wxg" [6d4e25ee-e6fb-4ad4-ba94-aa499e555ddb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005089584s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-106238 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (79.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m19.233382485s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0127 12:25:16.102833  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:25:43.805543  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/no-preload-835765/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.088226856s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.09s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-106238 "pgrep -a kubelet"
I0127 12:26:02.389501  893715 config.go:182] Loaded profile config "custom-flannel-106238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-106238 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xmdd6" [7fb9178d-af0b-46c9-9666-a6d550da1ab4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xmdd6" [7fb9178d-af0b-46c9-9666-a6d550da1ab4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003951531s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ldgxj" [b3e949b7-2865-49ce-b670-2d10b579862e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004494671s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-106238 "pgrep -a kubelet"
I0127 12:26:10.601074  893715 config.go:182] Loaded profile config "calico-106238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-106238 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7p84w" [e71a90cb-5128-4449-bf61-3c7091f69548] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7p84w" [e71a90cb-5128-4449-bf61-3c7091f69548] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005266883s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-106238 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-106238 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m18.631586523s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0127 12:26:58.789156  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/addons-033618/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.302729  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.309175  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.320563  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.342777  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.384845  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.466212  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.628446  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:00.950721  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:01.592008  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:02.873406  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:05.435074  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:10.557104  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:20.799035  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:41.281446  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/default-k8s-diff-port-475388/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.863968234s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.86s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-88lh5" [d10f059b-651c-49ae-9b74-1cd91645c262] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004655536s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-106238 "pgrep -a kubelet"
I0127 12:27:55.511784  893715 config.go:182] Loaded profile config "flannel-106238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-106238 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4gqr7" [10a3dc41-f5ca-4274-8a7f-58fe3a1806a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4gqr7" [10a3dc41-f5ca-4274-8a7f-58fe3a1806a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004222737s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-106238 "pgrep -a kubelet"
I0127 12:27:56.219983  893715 config.go:182] Loaded profile config "enable-default-cni-106238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-106238 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jsgpx" [031341ba-2473-47f3-84fb-fdd0fb710a28] Pending
helpers_test.go:344: "netcat-5d86dc444-jsgpx" [031341ba-2473-47f3-84fb-fdd0fb710a28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003885047s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-106238 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-106238 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (72.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-106238 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m12.243930305s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-106238 "pgrep -a kubelet"
I0127 12:29:44.996528  893715 config.go:182] Loaded profile config "bridge-106238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-106238 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vmdbg" [5577e607-8653-47e9-a443-d7cbdac3171c] Pending
helpers_test.go:344: "netcat-5d86dc444-vmdbg" [5577e607-8653-47e9-a443-d7cbdac3171c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vmdbg" [5577e607-8653-47e9-a443-d7cbdac3171c] Running
E0127 12:29:52.829289  893715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20318-888339/.minikube/profiles/auto-106238/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003482965s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-106238 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-106238 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (29/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-962669 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-962669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-962669
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-205778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-205778
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/kubenet (4.84s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-106238 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-106238" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-106238

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-106238"

                                                
                                                
----------------------- debugLogs end: kubenet-106238 [took: 4.639758524s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-106238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-106238
--- SKIP: TestNetworkPlugins/group/kubenet (4.84s)

TestNetworkPlugins/group/cilium (5.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-106238 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-106238

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-106238" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: iptables-save:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: iptables table nat:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-106238

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-106238

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-106238" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-106238" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-106238

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-106238

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-106238" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-106238" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-106238" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-106238" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-106238" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: kubelet daemon config:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> k8s: kubelet logs:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-106238

>>> host: docker daemon status:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: docker daemon config:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: docker system info:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: cri-docker daemon status:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: cri-docker daemon config:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: cri-dockerd version:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: containerd daemon status:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: containerd daemon config:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: containerd config dump:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: crio daemon status:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: crio daemon config:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: /etc/crio:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

>>> host: crio config:
* Profile "cilium-106238" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-106238"

----------------------- debugLogs end: cilium-106238 [took: 5.051976398s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-106238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-106238
--- SKIP: TestNetworkPlugins/group/cilium (5.24s)